00:00:00.000 Started by upstream project "autotest-spdk-master-vs-dpdk-v23.11" build number 979 00:00:00.000 originally caused by: 00:00:00.001 Started by upstream project "nightly-trigger" build number 3641 00:00:00.001 originally caused by: 00:00:00.001 Started by timer 00:00:00.151 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvme-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy 00:00:00.156 The recommended git tool is: git 00:00:00.156 using credential 00000000-0000-0000-0000-000000000002 00:00:00.160 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvme-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.216 Fetching changes from the remote Git repository 00:00:00.220 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.255 Using shallow fetch with depth 1 00:00:00.255 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.255 > git --version # timeout=10 00:00:00.287 > git --version # 'git version 2.39.2' 00:00:00.287 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.303 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.303 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:07.833 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:07.844 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:07.855 Checking out Revision b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf (FETCH_HEAD) 00:00:07.855 > git config core.sparsecheckout # timeout=10 00:00:07.870 > git read-tree -mu HEAD # timeout=10 00:00:07.887 > git checkout -f b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf # timeout=5 00:00:07.910 Commit message: "jenkins/jjb-config: Ignore OS version mismatch under freebsd" 00:00:07.910 > git rev-list --no-walk b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf # timeout=10 00:00:08.020 [Pipeline] Start of Pipeline 00:00:08.033 [Pipeline] library 00:00:08.035 Loading library shm_lib@master 00:00:08.035 Library shm_lib@master is cached. Copying from home. 00:00:08.048 [Pipeline] node 00:00:08.061 Running on VM-host-SM38 in /var/jenkins/workspace/nvme-vg-autotest 00:00:08.063 [Pipeline] { 00:00:08.072 [Pipeline] catchError 00:00:08.073 [Pipeline] { 00:00:08.081 [Pipeline] wrap 00:00:08.088 [Pipeline] { 00:00:08.093 [Pipeline] stage 00:00:08.095 [Pipeline] { (Prologue) 00:00:08.108 [Pipeline] echo 00:00:08.109 Node: VM-host-SM38 00:00:08.113 [Pipeline] cleanWs 00:00:08.122 [WS-CLEANUP] Deleting project workspace... 00:00:08.122 [WS-CLEANUP] Deferred wipeout is used... 
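The prologue above pins the build-pool repo by fetching a single commit at depth 1 and checking out FETCH_HEAD directly. A minimal sketch of that pattern, assuming a scratch directory (WORKDIR is an illustrative placeholder, not a value from this job) and reusing the URL and flags shown in the trace:

    # Shallow pin-and-checkout, as in the Jenkins SCM steps above.
    WORKDIR=$(mktemp -d)
    git init "$WORKDIR" && cd "$WORKDIR"
    git fetch --tags --force --progress --depth=1 -- \
        https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master
    # Detached checkout of the exact fetched commit, as with b9dd3f7 above.
    git checkout -f "$(git rev-parse FETCH_HEAD^{commit})"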
00:00:08.130 [WS-CLEANUP] done 00:00:08.306 [Pipeline] setCustomBuildProperty 00:00:08.393 [Pipeline] httpRequest 00:00:08.932 [Pipeline] echo 00:00:08.933 Sorcerer 10.211.164.20 is alive 00:00:08.943 [Pipeline] retry 00:00:08.944 [Pipeline] { 00:00:08.957 [Pipeline] httpRequest 00:00:08.962 HttpMethod: GET 00:00:08.963 URL: http://10.211.164.20/packages/jbp_b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf.tar.gz 00:00:08.964 Sending request to url: http://10.211.164.20/packages/jbp_b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf.tar.gz 00:00:08.987 Response Code: HTTP/1.1 200 OK 00:00:08.988 Success: Status code 200 is in the accepted range: 200,404 00:00:08.989 Saving response body to /var/jenkins/workspace/nvme-vg-autotest/jbp_b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf.tar.gz 00:00:30.571 [Pipeline] } 00:00:30.587 [Pipeline] // retry 00:00:30.594 [Pipeline] sh 00:00:30.879 + tar --no-same-owner -xf jbp_b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf.tar.gz 00:00:30.897 [Pipeline] httpRequest 00:00:31.275 [Pipeline] echo 00:00:31.276 Sorcerer 10.211.164.20 is alive 00:00:31.284 [Pipeline] retry 00:00:31.285 [Pipeline] { 00:00:31.297 [Pipeline] httpRequest 00:00:31.302 HttpMethod: GET 00:00:31.302 URL: http://10.211.164.20/packages/spdk_83e8405e4c25408c010ba2b9e02ce45e2347370c.tar.gz 00:00:31.303 Sending request to url: http://10.211.164.20/packages/spdk_83e8405e4c25408c010ba2b9e02ce45e2347370c.tar.gz 00:00:31.324 Response Code: HTTP/1.1 200 OK 00:00:31.324 Success: Status code 200 is in the accepted range: 200,404 00:00:31.325 Saving response body to /var/jenkins/workspace/nvme-vg-autotest/spdk_83e8405e4c25408c010ba2b9e02ce45e2347370c.tar.gz 00:01:47.365 [Pipeline] } 00:01:47.381 [Pipeline] // retry 00:01:47.389 [Pipeline] sh 00:01:47.674 + tar --no-same-owner -xf spdk_83e8405e4c25408c010ba2b9e02ce45e2347370c.tar.gz 00:01:50.995 [Pipeline] sh 00:01:51.320 + git -C spdk log --oneline -n5 00:01:51.320 83e8405e4 nvmf/fc: Qpair disconnect callback: Serialize FC delete connection & close qpair process 00:01:51.320 0eab4c6fb nvmf/fc: Validate the ctrlr pointer inside nvmf_fc_req_bdev_abort() 00:01:51.320 4bcab9fb9 correct kick for CQ full case 00:01:51.320 8531656d3 test/nvmf: Interrupt test for local pcie nvme device 00:01:51.320 318515b44 nvme/perf: interrupt mode support for pcie controller 00:01:51.343 [Pipeline] withCredentials 00:01:51.355 > git --version # timeout=10 00:01:51.368 > git --version # 'git version 2.39.2' 00:01:51.387 Masking supported pattern matches of $GIT_PASSWORD or $GIT_ASKPASS 00:01:51.389 [Pipeline] { 00:01:51.396 [Pipeline] retry 00:01:51.398 [Pipeline] { 00:01:51.409 [Pipeline] sh 00:01:51.689 + git ls-remote http://dpdk.org/git/dpdk-stable v23.11 00:01:51.962 [Pipeline] } 00:01:51.977 [Pipeline] // retry 00:01:51.982 [Pipeline] } 00:01:51.998 [Pipeline] // withCredentials 00:01:52.008 [Pipeline] httpRequest 00:01:52.391 [Pipeline] echo 00:01:52.393 Sorcerer 10.211.164.20 is alive 00:01:52.403 [Pipeline] retry 00:01:52.405 [Pipeline] { 00:01:52.420 [Pipeline] httpRequest 00:01:52.426 HttpMethod: GET 00:01:52.426 URL: http://10.211.164.20/packages/dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz 00:01:52.427 Sending request to url: http://10.211.164.20/packages/dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz 00:01:52.435 Response Code: HTTP/1.1 200 OK 00:01:52.436 Success: Status code 200 is in the accepted range: 200,404 00:01:52.436 Saving response body to /var/jenkins/workspace/nvme-vg-autotest/dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz 00:02:00.373 [Pipeline] } 
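Both package fetches above follow the same shape: probe the Sorcerer cache, run the GET inside the pipeline's retry block, then unpack with tar --no-same-owner. A rough shell equivalent, assuming curl in place of the Jenkins httpRequest step (the retry count and sleep interval are illustrative, not taken from this job):

    # Fetch-with-retry plus extraction, mirroring the retry { httpRequest } + tar steps.
    fetch_pkg() {
        local url=$1 out=${1##*/} i
        for ((i = 0; i < 3; i++)); do
            curl -fsS -o "$out" "$url" && break
            sleep 5
        done
        tar --no-same-owner -xf "$out"
    }
    fetch_pkg http://10.211.164.20/packages/spdk_83e8405e4c25408c010ba2b9e02ce45e2347370c.tar.gz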
00:02:00.390 [Pipeline] // retry 00:02:00.398 [Pipeline] sh 00:02:00.685 + tar --no-same-owner -xf dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz 00:02:02.618 [Pipeline] sh 00:02:02.903 + git -C dpdk log --oneline -n5 00:02:02.903 eeb0605f11 version: 23.11.0 00:02:02.903 238778122a doc: update release notes for 23.11 00:02:02.903 46aa6b3cfc doc: fix description of RSS features 00:02:02.903 dd88f51a57 devtools: forbid DPDK API in cnxk base driver 00:02:02.903 7e421ae345 devtools: support skipping forbid rule check 00:02:02.922 [Pipeline] writeFile 00:02:02.936 [Pipeline] sh 00:02:03.222 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh 00:02:03.236 [Pipeline] sh 00:02:03.521 + cat autorun-spdk.conf 00:02:03.521 SPDK_RUN_FUNCTIONAL_TEST=1 00:02:03.521 SPDK_TEST_NVME=1 00:02:03.521 SPDK_TEST_FTL=1 00:02:03.521 SPDK_TEST_ISAL=1 00:02:03.521 SPDK_RUN_ASAN=1 00:02:03.521 SPDK_RUN_UBSAN=1 00:02:03.521 SPDK_TEST_XNVME=1 00:02:03.521 SPDK_TEST_NVME_FDP=1 00:02:03.521 SPDK_TEST_NATIVE_DPDK=v23.11 00:02:03.521 SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:02:03.521 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:02:03.529 RUN_NIGHTLY=1 00:02:03.531 [Pipeline] } 00:02:03.545 [Pipeline] // stage 00:02:03.560 [Pipeline] stage 00:02:03.562 [Pipeline] { (Run VM) 00:02:03.575 [Pipeline] sh 00:02:03.862 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh 00:02:03.862 + echo 'Start stage prepare_nvme.sh' 00:02:03.862 Start stage prepare_nvme.sh 00:02:03.862 + [[ -n 10 ]] 00:02:03.862 + disk_prefix=ex10 00:02:03.862 + [[ -n /var/jenkins/workspace/nvme-vg-autotest ]] 00:02:03.862 + [[ -e /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf ]] 00:02:03.862 + source /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf 00:02:03.862 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:02:03.862 ++ SPDK_TEST_NVME=1 00:02:03.862 ++ SPDK_TEST_FTL=1 00:02:03.862 ++ SPDK_TEST_ISAL=1 00:02:03.862 ++ SPDK_RUN_ASAN=1 00:02:03.862 ++ SPDK_RUN_UBSAN=1 00:02:03.862 ++ SPDK_TEST_XNVME=1 00:02:03.862 ++ SPDK_TEST_NVME_FDP=1 00:02:03.862 ++ SPDK_TEST_NATIVE_DPDK=v23.11 00:02:03.862 ++ SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:02:03.862 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:02:03.862 ++ RUN_NIGHTLY=1 00:02:03.862 + cd /var/jenkins/workspace/nvme-vg-autotest 00:02:03.862 + nvme_files=() 00:02:03.862 + declare -A nvme_files 00:02:03.862 + backend_dir=/var/lib/libvirt/images/backends 00:02:03.862 + nvme_files['nvme.img']=5G 00:02:03.862 + nvme_files['nvme-cmb.img']=5G 00:02:03.862 + nvme_files['nvme-multi0.img']=4G 00:02:03.862 + nvme_files['nvme-multi1.img']=4G 00:02:03.862 + nvme_files['nvme-multi2.img']=4G 00:02:03.862 + nvme_files['nvme-openstack.img']=8G 00:02:03.862 + nvme_files['nvme-zns.img']=5G 00:02:03.862 + (( SPDK_TEST_NVME_PMR == 1 )) 00:02:03.862 + (( SPDK_TEST_FTL == 1 )) 00:02:03.862 + nvme_files["nvme-ftl.img"]=6G 00:02:03.862 + (( SPDK_TEST_NVME_FDP == 1 )) 00:02:03.862 + nvme_files["nvme-fdp.img"]=1G 00:02:03.862 + [[ ! 
-d /var/lib/libvirt/images/backends ]] 00:02:03.862 + for nvme in "${!nvme_files[@]}" 00:02:03.862 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex10-nvme-multi2.img -s 4G 00:02:03.862 Formatting '/var/lib/libvirt/images/backends/ex10-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc 00:02:03.862 + for nvme in "${!nvme_files[@]}" 00:02:03.862 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex10-nvme-ftl.img -s 6G 00:02:04.807 Formatting '/var/lib/libvirt/images/backends/ex10-nvme-ftl.img', fmt=raw size=6442450944 preallocation=falloc 00:02:04.807 + for nvme in "${!nvme_files[@]}" 00:02:04.807 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex10-nvme-cmb.img -s 5G 00:02:04.807 Formatting '/var/lib/libvirt/images/backends/ex10-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc 00:02:04.807 + for nvme in "${!nvme_files[@]}" 00:02:04.807 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex10-nvme-openstack.img -s 8G 00:02:04.807 Formatting '/var/lib/libvirt/images/backends/ex10-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc 00:02:04.807 + for nvme in "${!nvme_files[@]}" 00:02:04.807 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex10-nvme-zns.img -s 5G 00:02:04.807 Formatting '/var/lib/libvirt/images/backends/ex10-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc 00:02:04.807 + for nvme in "${!nvme_files[@]}" 00:02:04.807 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex10-nvme-multi1.img -s 4G 00:02:04.807 Formatting '/var/lib/libvirt/images/backends/ex10-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc 00:02:04.807 + for nvme in "${!nvme_files[@]}" 00:02:04.807 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex10-nvme-multi0.img -s 4G 00:02:05.076 Formatting '/var/lib/libvirt/images/backends/ex10-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc 00:02:05.076 + for nvme in "${!nvme_files[@]}" 00:02:05.076 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex10-nvme-fdp.img -s 1G 00:02:05.076 Formatting '/var/lib/libvirt/images/backends/ex10-nvme-fdp.img', fmt=raw size=1073741824 preallocation=falloc 00:02:05.076 + for nvme in "${!nvme_files[@]}" 00:02:05.076 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex10-nvme.img -s 5G 00:02:05.649 Formatting '/var/lib/libvirt/images/backends/ex10-nvme.img', fmt=raw size=5368709120 preallocation=falloc 00:02:05.649 ++ sudo grep -rl ex10-nvme.img /etc/libvirt/qemu 00:02:05.650 + echo 'End stage prepare_nvme.sh' 00:02:05.650 End stage prepare_nvme.sh 00:02:05.664 [Pipeline] sh 00:02:05.990 + DISTRO=fedora39 00:02:05.990 + CPUS=10 00:02:05.990 + RAM=12288 00:02:05.990 + jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh 00:02:05.990 Setup: -n 10 -s 12288 -x -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex10-nvme-ftl.img,nvme,,,,,true -b /var/lib/libvirt/images/backends/ex10-nvme.img -b /var/lib/libvirt/images/backends/ex10-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex10-nvme-multi1.img:/var/lib/libvirt/images/backends/ex10-nvme-multi2.img -b /var/lib/libvirt/images/backends/ex10-nvme-fdp.img,nvme,,,,,,on -H -a -v -f fedora39 00:02:05.990 
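The prepare_nvme.sh trace above drives everything from one associative array mapping image filename to size, with the FTL and FDP entries appended only when their test flags are set. A condensed sketch of that loop, using the paths and helper script shown in the trace (the array is truncated here for brevity):

    # Backing-image creation loop, as traced in prepare_nvme.sh above.
    backend_dir=/var/lib/libvirt/images/backends
    declare -A nvme_files=( [nvme.img]=5G [nvme-multi0.img]=4G )
    (( SPDK_TEST_FTL == 1 ))      && nvme_files[nvme-ftl.img]=6G
    (( SPDK_TEST_NVME_FDP == 1 )) && nvme_files[nvme-fdp.img]=1G
    for nvme in "${!nvme_files[@]}"; do
        sudo -E spdk/scripts/vagrant/create_nvme_img.sh \
            -n "$backend_dir/ex10-$nvme" -s "${nvme_files[$nvme]}"
    done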
00:02:05.990 DIR=/var/jenkins/workspace/nvme-vg-autotest/spdk/scripts/vagrant 00:02:05.990 SPDK_DIR=/var/jenkins/workspace/nvme-vg-autotest/spdk 00:02:05.990 VAGRANT_TARGET=/var/jenkins/workspace/nvme-vg-autotest 00:02:05.990 HELP=0 00:02:05.990 DRY_RUN=0 00:02:05.990 NVME_FILE=/var/lib/libvirt/images/backends/ex10-nvme-ftl.img,/var/lib/libvirt/images/backends/ex10-nvme.img,/var/lib/libvirt/images/backends/ex10-nvme-multi0.img,/var/lib/libvirt/images/backends/ex10-nvme-fdp.img, 00:02:05.990 NVME_DISKS_TYPE=nvme,nvme,nvme,nvme, 00:02:05.990 NVME_AUTO_CREATE=0 00:02:05.990 NVME_DISKS_NAMESPACES=,,/var/lib/libvirt/images/backends/ex10-nvme-multi1.img:/var/lib/libvirt/images/backends/ex10-nvme-multi2.img,, 00:02:05.990 NVME_CMB=,,,, 00:02:05.990 NVME_PMR=,,,, 00:02:05.990 NVME_ZNS=,,,, 00:02:05.990 NVME_MS=true,,,, 00:02:05.990 NVME_FDP=,,,on, 00:02:05.990 SPDK_VAGRANT_DISTRO=fedora39 00:02:05.990 SPDK_VAGRANT_VMCPU=10 00:02:05.990 SPDK_VAGRANT_VMRAM=12288 00:02:05.990 SPDK_VAGRANT_PROVIDER=libvirt 00:02:05.990 SPDK_VAGRANT_HTTP_PROXY= 00:02:05.990 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 00:02:05.990 SPDK_OPENSTACK_NETWORK=0 00:02:05.990 VAGRANT_PACKAGE_BOX=0 00:02:05.990 VAGRANTFILE=/var/jenkins/workspace/nvme-vg-autotest/spdk/scripts/vagrant/Vagrantfile 00:02:05.990 FORCE_DISTRO=true 00:02:05.990 VAGRANT_BOX_VERSION= 00:02:05.990 EXTRA_VAGRANTFILES= 00:02:05.990 NIC_MODEL=e1000 00:02:05.990 00:02:05.990 mkdir: created directory '/var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt' 00:02:05.990 /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt /var/jenkins/workspace/nvme-vg-autotest 00:02:08.538 Bringing machine 'default' up with 'libvirt' provider... 00:02:08.800 ==> default: Creating image (snapshot of base box volume). 00:02:09.062 ==> default: Creating domain with the following settings... 
00:02:09.062 ==> default: -- Name: fedora39-39-1.5-1721788873-2326_default_1731911761_69b77fb6846c44a47c6c 00:02:09.062 ==> default: -- Domain type: kvm 00:02:09.062 ==> default: -- Cpus: 10 00:02:09.062 ==> default: -- Feature: acpi 00:02:09.062 ==> default: -- Feature: apic 00:02:09.062 ==> default: -- Feature: pae 00:02:09.062 ==> default: -- Memory: 12288M 00:02:09.062 ==> default: -- Memory Backing: hugepages: 00:02:09.062 ==> default: -- Management MAC: 00:02:09.062 ==> default: -- Loader: 00:02:09.062 ==> default: -- Nvram: 00:02:09.062 ==> default: -- Base box: spdk/fedora39 00:02:09.062 ==> default: -- Storage pool: default 00:02:09.062 ==> default: -- Image: /var/lib/libvirt/images/fedora39-39-1.5-1721788873-2326_default_1731911761_69b77fb6846c44a47c6c.img (20G) 00:02:09.062 ==> default: -- Volume Cache: default 00:02:09.062 ==> default: -- Kernel: 00:02:09.062 ==> default: -- Initrd: 00:02:09.062 ==> default: -- Graphics Type: vnc 00:02:09.062 ==> default: -- Graphics Port: -1 00:02:09.062 ==> default: -- Graphics IP: 127.0.0.1 00:02:09.062 ==> default: -- Graphics Password: Not defined 00:02:09.062 ==> default: -- Video Type: cirrus 00:02:09.062 ==> default: -- Video VRAM: 9216 00:02:09.062 ==> default: -- Sound Type: 00:02:09.062 ==> default: -- Keymap: en-us 00:02:09.062 ==> default: -- TPM Path: 00:02:09.062 ==> default: -- INPUT: type=mouse, bus=ps2 00:02:09.062 ==> default: -- Command line args: 00:02:09.062 ==> default: -> value=-device, 00:02:09.062 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10, 00:02:09.062 ==> default: -> value=-drive, 00:02:09.062 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex10-nvme-ftl.img,if=none,id=nvme-0-drive0, 00:02:09.062 ==> default: -> value=-device, 00:02:09.062 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,ms=64, 00:02:09.062 ==> default: -> value=-device, 00:02:09.062 ==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11, 00:02:09.062 ==> default: -> value=-drive, 00:02:09.062 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex10-nvme.img,if=none,id=nvme-1-drive0, 00:02:09.062 ==> default: -> value=-device, 00:02:09.062 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:02:09.062 ==> default: -> value=-device, 00:02:09.062 ==> default: -> value=nvme,id=nvme-2,serial=12342,addr=0x12, 00:02:09.062 ==> default: -> value=-drive, 00:02:09.063 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex10-nvme-multi0.img,if=none,id=nvme-2-drive0, 00:02:09.063 ==> default: -> value=-device, 00:02:09.063 ==> default: -> value=nvme-ns,drive=nvme-2-drive0,bus=nvme-2,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:02:09.063 ==> default: -> value=-drive, 00:02:09.063 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex10-nvme-multi1.img,if=none,id=nvme-2-drive1, 00:02:09.063 ==> default: -> value=-device, 00:02:09.063 ==> default: -> value=nvme-ns,drive=nvme-2-drive1,bus=nvme-2,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:02:09.063 ==> default: -> value=-drive, 00:02:09.063 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex10-nvme-multi2.img,if=none,id=nvme-2-drive2, 00:02:09.063 ==> default: -> value=-device, 00:02:09.063 ==> default: -> 
value=nvme-ns,drive=nvme-2-drive2,bus=nvme-2,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:02:09.063 ==> default: -> value=-device, 00:02:09.063 ==> default: -> value=nvme-subsys,id=fdp-subsys3,fdp=on,fdp.runs=96M,fdp.nrg=2,fdp.nruh=8, 00:02:09.063 ==> default: -> value=-device, 00:02:09.063 ==> default: -> value=nvme,id=nvme-3,serial=12343,addr=0x13,subsys=fdp-subsys3, 00:02:09.063 ==> default: -> value=-drive, 00:02:09.063 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex10-nvme-fdp.img,if=none,id=nvme-3-drive0, 00:02:09.063 ==> default: -> value=-device, 00:02:09.063 ==> default: -> value=nvme-ns,drive=nvme-3-drive0,bus=nvme-3,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:02:09.324 ==> default: Creating shared folders metadata... 00:02:09.324 ==> default: Starting domain. 00:02:11.242 ==> default: Waiting for domain to get an IP address... 00:02:29.365 ==> default: Waiting for SSH to become available... 00:02:29.365 ==> default: Configuring and enabling network interfaces... 00:02:32.670 default: SSH address: 192.168.121.84:22 00:02:32.670 default: SSH username: vagrant 00:02:32.670 default: SSH auth method: private key 00:02:34.606 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk 00:02:42.744 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest/dpdk/ => /home/vagrant/spdk_repo/dpdk 00:02:46.925 ==> default: Mounting SSHFS shared folder... 00:02:48.823 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt/output => /home/vagrant/spdk_repo/output 00:02:48.823 ==> default: Checking Mount.. 00:02:49.757 ==> default: Folder Successfully Mounted! 00:02:49.757 00:02:49.757 SUCCESS! 00:02:49.757 00:02:49.757 cd to /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt and type "vagrant ssh" to use. 00:02:49.757 Use vagrant "suspend" and vagrant "resume" to stop and start. 00:02:49.757 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt" to destroy all trace of vm. 00:02:49.757 00:02:49.766 [Pipeline] } 00:02:49.784 [Pipeline] // stage 00:02:49.794 [Pipeline] dir 00:02:49.794 Running in /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt 00:02:49.796 [Pipeline] { 00:02:49.810 [Pipeline] catchError 00:02:49.812 [Pipeline] { 00:02:49.825 [Pipeline] sh 00:02:50.104 + vagrant ssh-config --host vagrant 00:02:50.104 + tee ssh_conf 00:02:50.104 + sed -ne '/^Host/,$p' 00:02:52.661 Host vagrant 00:02:52.662 HostName 192.168.121.84 00:02:52.662 User vagrant 00:02:52.662 Port 22 00:02:52.662 UserKnownHostsFile /dev/null 00:02:52.662 StrictHostKeyChecking no 00:02:52.662 PasswordAuthentication no 00:02:52.662 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora39/39-1.5-1721788873-2326/libvirt/fedora39 00:02:52.662 IdentitiesOnly yes 00:02:52.662 LogLevel FATAL 00:02:52.662 ForwardAgent yes 00:02:52.662 ForwardX11 yes 00:02:52.662 00:02:52.674 [Pipeline] withEnv 00:02:52.677 [Pipeline] { 00:02:52.689 [Pipeline] sh 00:02:52.967 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant '#!/bin/bash 00:02:52.967 source /etc/os-release 00:02:52.967 [[ -e /image.version ]] && img=$(< /image.version) 00:02:52.967 # Minimal, systemd-like check. 
00:02:52.967 if [[ -e /.dockerenv ]]; then 00:02:52.967 # Clear garbage from the node'\''s name: 00:02:52.967 # agt-er_autotest_547-896 -> autotest_547-896 00:02:52.967 # $HOSTNAME is the actual container id 00:02:52.967 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_} 00:02:52.967 if grep -q "/etc/hostname" /proc/self/mountinfo; then 00:02:52.967 # We can assume this is a mount from a host where container is running, 00:02:52.967 # so fetch its hostname to easily identify the target swarm worker. 00:02:52.967 container="$(< /etc/hostname) ($agent)" 00:02:52.967 else 00:02:52.967 # Fallback 00:02:52.967 container=$agent 00:02:52.967 fi 00:02:52.967 fi 00:02:52.967 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}" 00:02:52.967 ' 00:02:52.977 [Pipeline] } 00:02:52.992 [Pipeline] // withEnv 00:02:53.000 [Pipeline] setCustomBuildProperty 00:02:53.015 [Pipeline] stage 00:02:53.017 [Pipeline] { (Tests) 00:02:53.033 [Pipeline] sh 00:02:53.311 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./ 00:02:53.582 [Pipeline] sh 00:02:53.860 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./ 00:02:53.876 [Pipeline] timeout 00:02:53.876 Timeout set to expire in 50 min 00:02:53.878 [Pipeline] { 00:02:53.893 [Pipeline] sh 00:02:54.171 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant 'git -C spdk_repo/spdk reset --hard' 00:02:54.429 HEAD is now at 83e8405e4 nvmf/fc: Qpair disconnect callback: Serialize FC delete connection & close qpair process 00:02:54.443 [Pipeline] sh 00:02:54.728 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant 'sudo chown vagrant:vagrant spdk_repo' 00:02:54.744 [Pipeline] sh 00:02:55.024 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo 00:02:55.041 [Pipeline] sh 00:02:55.320 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant 'JOB_BASE_NAME=nvme-vg-autotest ./autoruner.sh spdk_repo' 00:02:55.320 ++ readlink -f spdk_repo 00:02:55.320 + DIR_ROOT=/home/vagrant/spdk_repo 00:02:55.320 + [[ -n /home/vagrant/spdk_repo ]] 00:02:55.320 + DIR_SPDK=/home/vagrant/spdk_repo/spdk 00:02:55.320 + DIR_OUTPUT=/home/vagrant/spdk_repo/output 00:02:55.320 + [[ -d /home/vagrant/spdk_repo/spdk ]] 00:02:55.320 + [[ ! 
-d /home/vagrant/spdk_repo/output ]] 00:02:55.320 + [[ -d /home/vagrant/spdk_repo/output ]] 00:02:55.320 + [[ nvme-vg-autotest == pkgdep-* ]] 00:02:55.320 + cd /home/vagrant/spdk_repo 00:02:55.320 + source /etc/os-release 00:02:55.320 ++ NAME='Fedora Linux' 00:02:55.320 ++ VERSION='39 (Cloud Edition)' 00:02:55.320 ++ ID=fedora 00:02:55.320 ++ VERSION_ID=39 00:02:55.320 ++ VERSION_CODENAME= 00:02:55.320 ++ PLATFORM_ID=platform:f39 00:02:55.320 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)' 00:02:55.320 ++ ANSI_COLOR='0;38;2;60;110;180' 00:02:55.320 ++ LOGO=fedora-logo-icon 00:02:55.320 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39 00:02:55.320 ++ HOME_URL=https://fedoraproject.org/ 00:02:55.320 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/ 00:02:55.320 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:02:55.320 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:02:55.320 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:02:55.320 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39 00:02:55.320 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:02:55.320 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39 00:02:55.320 ++ SUPPORT_END=2024-11-12 00:02:55.320 ++ VARIANT='Cloud Edition' 00:02:55.320 ++ VARIANT_ID=cloud 00:02:55.320 + uname -a 00:02:55.578 Linux fedora39-cloud-1721788873-2326 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux 00:02:55.578 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:02:55.836 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:02:56.095 Hugepages 00:02:56.095 node hugesize free / total 00:02:56.095 node0 1048576kB 0 / 0 00:02:56.095 node0 2048kB 0 / 0 00:02:56.095 00:02:56.095 Type BDF Vendor Device NUMA Driver Device Block devices 00:02:56.095 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:02:56.095 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:02:56.095 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 00:02:56.095 NVMe 0000:00:12.0 1b36 0010 unknown nvme nvme2 nvme2n1 nvme2n2 nvme2n3 00:02:56.095 NVMe 0000:00:13.0 1b36 0010 unknown nvme nvme3 nvme3n1 00:02:56.095 + rm -f /tmp/spdk-ld-path 00:02:56.095 + source autorun-spdk.conf 00:02:56.095 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:02:56.095 ++ SPDK_TEST_NVME=1 00:02:56.095 ++ SPDK_TEST_FTL=1 00:02:56.095 ++ SPDK_TEST_ISAL=1 00:02:56.095 ++ SPDK_RUN_ASAN=1 00:02:56.095 ++ SPDK_RUN_UBSAN=1 00:02:56.095 ++ SPDK_TEST_XNVME=1 00:02:56.095 ++ SPDK_TEST_NVME_FDP=1 00:02:56.095 ++ SPDK_TEST_NATIVE_DPDK=v23.11 00:02:56.095 ++ SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:02:56.095 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:02:56.095 ++ RUN_NIGHTLY=1 00:02:56.095 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:02:56.095 + [[ -n '' ]] 00:02:56.095 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk 00:02:56.095 + for M in /var/spdk/build-*-manifest.txt 00:02:56.095 + [[ -f /var/spdk/build-kernel-manifest.txt ]] 00:02:56.095 + cp /var/spdk/build-kernel-manifest.txt /home/vagrant/spdk_repo/output/ 00:02:56.095 + for M in /var/spdk/build-*-manifest.txt 00:02:56.095 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:02:56.095 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/ 00:02:56.095 + for M in /var/spdk/build-*-manifest.txt 00:02:56.095 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:02:56.095 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/ 00:02:56.095 ++ uname 00:02:56.095 + [[ Linux == 
\L\i\n\u\x ]] 00:02:56.095 + sudo dmesg -T 00:02:56.095 + sudo dmesg --clear 00:02:56.095 + dmesg_pid=5767 00:02:56.095 + [[ Fedora Linux == FreeBSD ]] 00:02:56.095 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:02:56.095 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:02:56.095 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:02:56.095 + [[ -x /usr/src/fio-static/fio ]] 00:02:56.095 + sudo dmesg -Tw 00:02:56.095 + export FIO_BIN=/usr/src/fio-static/fio 00:02:56.095 + FIO_BIN=/usr/src/fio-static/fio 00:02:56.095 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]] 00:02:56.095 + [[ ! -v VFIO_QEMU_BIN ]] 00:02:56.095 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:02:56.095 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:02:56.095 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:02:56.095 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:02:56.095 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:02:56.095 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:02:56.095 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:02:56.095 06:36:49 -- common/autotest_common.sh@1692 -- $ [[ n == y ]] 00:02:56.095 06:36:49 -- spdk/autorun.sh@20 -- $ source /home/vagrant/spdk_repo/autorun-spdk.conf 00:02:56.095 06:36:49 -- spdk_repo/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1 00:02:56.095 06:36:49 -- spdk_repo/autorun-spdk.conf@2 -- $ SPDK_TEST_NVME=1 00:02:56.095 06:36:49 -- spdk_repo/autorun-spdk.conf@3 -- $ SPDK_TEST_FTL=1 00:02:56.095 06:36:49 -- spdk_repo/autorun-spdk.conf@4 -- $ SPDK_TEST_ISAL=1 00:02:56.095 06:36:49 -- spdk_repo/autorun-spdk.conf@5 -- $ SPDK_RUN_ASAN=1 00:02:56.095 06:36:49 -- spdk_repo/autorun-spdk.conf@6 -- $ SPDK_RUN_UBSAN=1 00:02:56.095 06:36:49 -- spdk_repo/autorun-spdk.conf@7 -- $ SPDK_TEST_XNVME=1 00:02:56.095 06:36:49 -- spdk_repo/autorun-spdk.conf@8 -- $ SPDK_TEST_NVME_FDP=1 00:02:56.095 06:36:49 -- spdk_repo/autorun-spdk.conf@9 -- $ SPDK_TEST_NATIVE_DPDK=v23.11 00:02:56.095 06:36:49 -- spdk_repo/autorun-spdk.conf@10 -- $ SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:02:56.095 06:36:49 -- spdk_repo/autorun-spdk.conf@11 -- $ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:02:56.095 06:36:49 -- spdk_repo/autorun-spdk.conf@12 -- $ RUN_NIGHTLY=1 00:02:56.095 06:36:49 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT 00:02:56.095 06:36:49 -- spdk/autorun.sh@25 -- $ /home/vagrant/spdk_repo/spdk/autobuild.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:02:56.352 06:36:49 -- common/autotest_common.sh@1692 -- $ [[ n == y ]] 00:02:56.352 06:36:49 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:02:56.352 06:36:49 -- scripts/common.sh@15 -- $ shopt -s extglob 00:02:56.352 06:36:49 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:02:56.352 06:36:49 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:56.352 06:36:49 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:56.352 06:36:49 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:56.352 06:36:49 -- 
paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:56.352 06:36:49 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:56.352 06:36:49 -- paths/export.sh@5 -- $ export PATH 00:02:56.352 06:36:49 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:56.352 06:36:49 -- common/autobuild_common.sh@485 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:02:56.352 06:36:49 -- common/autobuild_common.sh@486 -- $ date +%s 00:02:56.352 06:36:49 -- common/autobuild_common.sh@486 -- $ mktemp -dt spdk_1731911809.XXXXXX 00:02:56.352 06:36:49 -- common/autobuild_common.sh@486 -- $ SPDK_WORKSPACE=/tmp/spdk_1731911809.yLAXjT 00:02:56.352 06:36:49 -- common/autobuild_common.sh@488 -- $ [[ -n '' ]] 00:02:56.352 06:36:49 -- common/autobuild_common.sh@492 -- $ '[' -n v23.11 ']' 00:02:56.352 06:36:49 -- common/autobuild_common.sh@493 -- $ dirname /home/vagrant/spdk_repo/dpdk/build 00:02:56.352 06:36:49 -- common/autobuild_common.sh@493 -- $ scanbuild_exclude=' --exclude /home/vagrant/spdk_repo/dpdk' 00:02:56.352 06:36:49 -- common/autobuild_common.sh@499 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:02:56.352 06:36:49 -- common/autobuild_common.sh@501 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/dpdk --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:02:56.352 06:36:49 -- common/autobuild_common.sh@502 -- $ get_config_params 00:02:56.352 06:36:49 -- common/autotest_common.sh@409 -- $ xtrace_disable 00:02:56.352 06:36:49 -- common/autotest_common.sh@10 -- $ set +x 00:02:56.352 06:36:49 -- common/autobuild_common.sh@502 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-dpdk=/home/vagrant/spdk_repo/dpdk/build --with-xnvme' 00:02:56.352 06:36:49 -- common/autobuild_common.sh@504 -- $ start_monitor_resources 00:02:56.352 06:36:49 -- pm/common@17 -- $ local monitor 00:02:56.352 06:36:49 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:56.352 06:36:49 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:56.352 06:36:49 -- pm/common@25 -- $ 
sleep 1 00:02:56.352 06:36:49 -- pm/common@21 -- $ date +%s 00:02:56.352 06:36:49 -- pm/common@21 -- $ date +%s 00:02:56.352 06:36:49 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1731911809 00:02:56.352 06:36:49 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1731911809 00:02:56.352 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1731911809_collect-vmstat.pm.log 00:02:56.352 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1731911809_collect-cpu-load.pm.log 00:02:57.287 06:36:50 -- common/autobuild_common.sh@505 -- $ trap stop_monitor_resources EXIT 00:02:57.288 06:36:50 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:02:57.288 06:36:50 -- spdk/autobuild.sh@12 -- $ umask 022 00:02:57.288 06:36:50 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk 00:02:57.288 06:36:50 -- spdk/autobuild.sh@16 -- $ date -u 00:02:57.288 Mon Nov 18 06:36:50 AM UTC 2024 00:02:57.288 06:36:50 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:02:57.288 v25.01-pre-189-g83e8405e4 00:02:57.288 06:36:50 -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']' 00:02:57.288 06:36:50 -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan' 00:02:57.288 06:36:50 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:02:57.288 06:36:50 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:02:57.288 06:36:50 -- common/autotest_common.sh@10 -- $ set +x 00:02:57.288 ************************************ 00:02:57.288 START TEST asan 00:02:57.288 ************************************ 00:02:57.288 using asan 00:02:57.288 06:36:50 asan -- common/autotest_common.sh@1129 -- $ echo 'using asan' 00:02:57.288 00:02:57.288 real 0m0.000s 00:02:57.288 user 0m0.000s 00:02:57.288 sys 0m0.000s 00:02:57.288 06:36:50 asan -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:02:57.288 ************************************ 00:02:57.288 END TEST asan 00:02:57.288 ************************************ 00:02:57.288 06:36:50 asan -- common/autotest_common.sh@10 -- $ set +x 00:02:57.288 06:36:50 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:02:57.288 06:36:50 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:02:57.288 06:36:50 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:02:57.288 06:36:50 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:02:57.288 06:36:50 -- common/autotest_common.sh@10 -- $ set +x 00:02:57.288 ************************************ 00:02:57.288 START TEST ubsan 00:02:57.288 ************************************ 00:02:57.288 using ubsan 00:02:57.288 06:36:50 ubsan -- common/autotest_common.sh@1129 -- $ echo 'using ubsan' 00:02:57.288 00:02:57.288 real 0m0.000s 00:02:57.288 user 0m0.000s 00:02:57.288 sys 0m0.000s 00:02:57.288 06:36:50 ubsan -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:02:57.288 ************************************ 00:02:57.288 END TEST ubsan 00:02:57.288 06:36:50 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:02:57.288 ************************************ 00:02:57.288 06:36:50 -- spdk/autobuild.sh@27 -- $ '[' -n v23.11 ']' 00:02:57.288 06:36:50 -- spdk/autobuild.sh@28 -- $ build_native_dpdk 00:02:57.288 06:36:50 -- common/autobuild_common.sh@442 -- $ run_test build_native_dpdk _build_native_dpdk 00:02:57.288 06:36:50 -- common/autotest_common.sh@1105 -- $ '[' 2 -le 1 
']' 00:02:57.288 06:36:50 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:02:57.288 06:36:50 -- common/autotest_common.sh@10 -- $ set +x 00:02:57.288 ************************************ 00:02:57.288 START TEST build_native_dpdk 00:02:57.288 ************************************ 00:02:57.288 06:36:50 build_native_dpdk -- common/autotest_common.sh@1129 -- $ _build_native_dpdk 00:02:57.288 06:36:50 build_native_dpdk -- common/autobuild_common.sh@48 -- $ local external_dpdk_dir 00:02:57.288 06:36:50 build_native_dpdk -- common/autobuild_common.sh@49 -- $ local external_dpdk_base_dir 00:02:57.288 06:36:50 build_native_dpdk -- common/autobuild_common.sh@50 -- $ local compiler_version 00:02:57.288 06:36:50 build_native_dpdk -- common/autobuild_common.sh@51 -- $ local compiler 00:02:57.288 06:36:50 build_native_dpdk -- common/autobuild_common.sh@52 -- $ local dpdk_kmods 00:02:57.288 06:36:50 build_native_dpdk -- common/autobuild_common.sh@53 -- $ local repo=dpdk 00:02:57.288 06:36:50 build_native_dpdk -- common/autobuild_common.sh@55 -- $ compiler=gcc 00:02:57.288 06:36:50 build_native_dpdk -- common/autobuild_common.sh@61 -- $ export CC=gcc 00:02:57.288 06:36:50 build_native_dpdk -- common/autobuild_common.sh@61 -- $ CC=gcc 00:02:57.288 06:36:50 build_native_dpdk -- common/autobuild_common.sh@63 -- $ [[ gcc != *clang* ]] 00:02:57.288 06:36:50 build_native_dpdk -- common/autobuild_common.sh@63 -- $ [[ gcc != *gcc* ]] 00:02:57.288 06:36:50 build_native_dpdk -- common/autobuild_common.sh@68 -- $ gcc -dumpversion 00:02:57.288 06:36:50 build_native_dpdk -- common/autobuild_common.sh@68 -- $ compiler_version=13 00:02:57.288 06:36:50 build_native_dpdk -- common/autobuild_common.sh@69 -- $ compiler_version=13 00:02:57.288 06:36:50 build_native_dpdk -- common/autobuild_common.sh@70 -- $ external_dpdk_dir=/home/vagrant/spdk_repo/dpdk/build 00:02:57.288 06:36:50 build_native_dpdk -- common/autobuild_common.sh@71 -- $ dirname /home/vagrant/spdk_repo/dpdk/build 00:02:57.288 06:36:50 build_native_dpdk -- common/autobuild_common.sh@71 -- $ external_dpdk_base_dir=/home/vagrant/spdk_repo/dpdk 00:02:57.288 06:36:50 build_native_dpdk -- common/autobuild_common.sh@73 -- $ [[ ! 
-d /home/vagrant/spdk_repo/dpdk ]] 00:02:57.288 06:36:50 build_native_dpdk -- common/autobuild_common.sh@82 -- $ orgdir=/home/vagrant/spdk_repo/spdk 00:02:57.288 06:36:50 build_native_dpdk -- common/autobuild_common.sh@83 -- $ git -C /home/vagrant/spdk_repo/dpdk log --oneline -n 5 00:02:57.288 eeb0605f11 version: 23.11.0 00:02:57.288 238778122a doc: update release notes for 23.11 00:02:57.288 46aa6b3cfc doc: fix description of RSS features 00:02:57.288 dd88f51a57 devtools: forbid DPDK API in cnxk base driver 00:02:57.288 7e421ae345 devtools: support skipping forbid rule check 00:02:57.288 06:36:50 build_native_dpdk -- common/autobuild_common.sh@85 -- $ dpdk_cflags='-fPIC -g -fcommon' 00:02:57.288 06:36:50 build_native_dpdk -- common/autobuild_common.sh@86 -- $ dpdk_ldflags= 00:02:57.288 06:36:50 build_native_dpdk -- common/autobuild_common.sh@87 -- $ dpdk_ver=23.11.0 00:02:57.288 06:36:50 build_native_dpdk -- common/autobuild_common.sh@89 -- $ [[ gcc == *gcc* ]] 00:02:57.288 06:36:50 build_native_dpdk -- common/autobuild_common.sh@89 -- $ [[ 13 -ge 5 ]] 00:02:57.288 06:36:50 build_native_dpdk -- common/autobuild_common.sh@90 -- $ dpdk_cflags+=' -Werror' 00:02:57.288 06:36:50 build_native_dpdk -- common/autobuild_common.sh@93 -- $ [[ gcc == *gcc* ]] 00:02:57.288 06:36:50 build_native_dpdk -- common/autobuild_common.sh@93 -- $ [[ 13 -ge 10 ]] 00:02:57.288 06:36:50 build_native_dpdk -- common/autobuild_common.sh@94 -- $ dpdk_cflags+=' -Wno-stringop-overflow' 00:02:57.288 06:36:50 build_native_dpdk -- common/autobuild_common.sh@100 -- $ DPDK_DRIVERS=("bus" "bus/pci" "bus/vdev" "mempool/ring" "net/i40e" "net/i40e/base") 00:02:57.288 06:36:50 build_native_dpdk -- common/autobuild_common.sh@102 -- $ local mlx5_libs_added=n 00:02:57.288 06:36:50 build_native_dpdk -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]] 00:02:57.288 06:36:50 build_native_dpdk -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]] 00:02:57.288 06:36:50 build_native_dpdk -- common/autobuild_common.sh@139 -- $ [[ 0 -eq 1 ]] 00:02:57.288 06:36:50 build_native_dpdk -- common/autobuild_common.sh@167 -- $ cd /home/vagrant/spdk_repo/dpdk 00:02:57.288 06:36:50 build_native_dpdk -- common/autobuild_common.sh@168 -- $ uname -s 00:02:57.288 06:36:50 build_native_dpdk -- common/autobuild_common.sh@168 -- $ '[' Linux = Linux ']' 00:02:57.288 06:36:50 build_native_dpdk -- common/autobuild_common.sh@169 -- $ lt 23.11.0 21.11.0 00:02:57.288 06:36:50 build_native_dpdk -- scripts/common.sh@373 -- $ cmp_versions 23.11.0 '<' 21.11.0 00:02:57.288 06:36:50 build_native_dpdk -- scripts/common.sh@333 -- $ local ver1 ver1_l 00:02:57.288 06:36:50 build_native_dpdk -- scripts/common.sh@334 -- $ local ver2 ver2_l 00:02:57.288 06:36:50 build_native_dpdk -- scripts/common.sh@336 -- $ IFS=.-: 00:02:57.288 06:36:50 build_native_dpdk -- scripts/common.sh@336 -- $ read -ra ver1 00:02:57.288 06:36:50 build_native_dpdk -- scripts/common.sh@337 -- $ IFS=.-: 00:02:57.288 06:36:50 build_native_dpdk -- scripts/common.sh@337 -- $ read -ra ver2 00:02:57.288 06:36:50 build_native_dpdk -- scripts/common.sh@338 -- $ local 'op=<' 00:02:57.288 06:36:50 build_native_dpdk -- scripts/common.sh@340 -- $ ver1_l=3 00:02:57.288 06:36:50 build_native_dpdk -- scripts/common.sh@341 -- $ ver2_l=3 00:02:57.288 06:36:50 build_native_dpdk -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v 00:02:57.288 06:36:50 build_native_dpdk -- scripts/common.sh@344 -- $ case "$op" in 00:02:57.288 06:36:50 build_native_dpdk -- scripts/common.sh@345 -- $ : 1 00:02:57.288 06:36:50 
build_native_dpdk -- scripts/common.sh@364 -- $ (( v = 0 )) 00:02:57.288 06:36:50 build_native_dpdk -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:02:57.288 06:36:50 build_native_dpdk -- scripts/common.sh@365 -- $ decimal 23 00:02:57.288 06:36:50 build_native_dpdk -- scripts/common.sh@353 -- $ local d=23 00:02:57.288 06:36:50 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 23 =~ ^[0-9]+$ ]] 00:02:57.288 06:36:50 build_native_dpdk -- scripts/common.sh@355 -- $ echo 23 00:02:57.288 06:36:50 build_native_dpdk -- scripts/common.sh@365 -- $ ver1[v]=23 00:02:57.288 06:36:50 build_native_dpdk -- scripts/common.sh@366 -- $ decimal 21 00:02:57.288 06:36:50 build_native_dpdk -- scripts/common.sh@353 -- $ local d=21 00:02:57.288 06:36:50 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 21 =~ ^[0-9]+$ ]] 00:02:57.288 06:36:50 build_native_dpdk -- scripts/common.sh@355 -- $ echo 21 00:02:57.288 06:36:50 build_native_dpdk -- scripts/common.sh@366 -- $ ver2[v]=21 00:02:57.288 06:36:50 build_native_dpdk -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] )) 00:02:57.288 06:36:50 build_native_dpdk -- scripts/common.sh@367 -- $ return 1 00:02:57.288 06:36:50 build_native_dpdk -- common/autobuild_common.sh@173 -- $ patch -p1 00:02:57.288 patching file config/rte_config.h 00:02:57.288 Hunk #1 succeeded at 60 (offset 1 line). 00:02:57.288 06:36:50 build_native_dpdk -- common/autobuild_common.sh@176 -- $ lt 23.11.0 24.07.0 00:02:57.288 06:36:50 build_native_dpdk -- scripts/common.sh@373 -- $ cmp_versions 23.11.0 '<' 24.07.0 00:02:57.288 06:36:50 build_native_dpdk -- scripts/common.sh@333 -- $ local ver1 ver1_l 00:02:57.288 06:36:50 build_native_dpdk -- scripts/common.sh@334 -- $ local ver2 ver2_l 00:02:57.288 06:36:50 build_native_dpdk -- scripts/common.sh@336 -- $ IFS=.-: 00:02:57.288 06:36:50 build_native_dpdk -- scripts/common.sh@336 -- $ read -ra ver1 00:02:57.288 06:36:50 build_native_dpdk -- scripts/common.sh@337 -- $ IFS=.-: 00:02:57.547 06:36:50 build_native_dpdk -- scripts/common.sh@337 -- $ read -ra ver2 00:02:57.547 06:36:50 build_native_dpdk -- scripts/common.sh@338 -- $ local 'op=<' 00:02:57.547 06:36:50 build_native_dpdk -- scripts/common.sh@340 -- $ ver1_l=3 00:02:57.547 06:36:50 build_native_dpdk -- scripts/common.sh@341 -- $ ver2_l=3 00:02:57.547 06:36:50 build_native_dpdk -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v 00:02:57.547 06:36:50 build_native_dpdk -- scripts/common.sh@344 -- $ case "$op" in 00:02:57.547 06:36:50 build_native_dpdk -- scripts/common.sh@345 -- $ : 1 00:02:57.547 06:36:50 build_native_dpdk -- scripts/common.sh@364 -- $ (( v = 0 )) 00:02:57.547 06:36:50 build_native_dpdk -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:02:57.547 06:36:50 build_native_dpdk -- scripts/common.sh@365 -- $ decimal 23 00:02:57.547 06:36:50 build_native_dpdk -- scripts/common.sh@353 -- $ local d=23 00:02:57.547 06:36:50 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 23 =~ ^[0-9]+$ ]] 00:02:57.547 06:36:50 build_native_dpdk -- scripts/common.sh@355 -- $ echo 23 00:02:57.547 06:36:50 build_native_dpdk -- scripts/common.sh@365 -- $ ver1[v]=23 00:02:57.547 06:36:50 build_native_dpdk -- scripts/common.sh@366 -- $ decimal 24 00:02:57.547 06:36:50 build_native_dpdk -- scripts/common.sh@353 -- $ local d=24 00:02:57.547 06:36:50 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 24 =~ ^[0-9]+$ ]] 00:02:57.547 06:36:50 build_native_dpdk -- scripts/common.sh@355 -- $ echo 24 00:02:57.547 06:36:50 build_native_dpdk -- scripts/common.sh@366 -- $ ver2[v]=24 00:02:57.547 06:36:50 build_native_dpdk -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] )) 00:02:57.547 06:36:50 build_native_dpdk -- scripts/common.sh@368 -- $ (( ver1[v] < ver2[v] )) 00:02:57.547 06:36:50 build_native_dpdk -- scripts/common.sh@368 -- $ return 0 00:02:57.547 06:36:50 build_native_dpdk -- common/autobuild_common.sh@177 -- $ patch -p1 00:02:57.547 patching file lib/pcapng/rte_pcapng.c 00:02:57.547 06:36:50 build_native_dpdk -- common/autobuild_common.sh@179 -- $ ge 23.11.0 24.07.0 00:02:57.547 06:36:50 build_native_dpdk -- scripts/common.sh@376 -- $ cmp_versions 23.11.0 '>=' 24.07.0 00:02:57.547 06:36:50 build_native_dpdk -- scripts/common.sh@333 -- $ local ver1 ver1_l 00:02:57.547 06:36:50 build_native_dpdk -- scripts/common.sh@334 -- $ local ver2 ver2_l 00:02:57.547 06:36:50 build_native_dpdk -- scripts/common.sh@336 -- $ IFS=.-: 00:02:57.547 06:36:50 build_native_dpdk -- scripts/common.sh@336 -- $ read -ra ver1 00:02:57.547 06:36:50 build_native_dpdk -- scripts/common.sh@337 -- $ IFS=.-: 00:02:57.547 06:36:50 build_native_dpdk -- scripts/common.sh@337 -- $ read -ra ver2 00:02:57.547 06:36:50 build_native_dpdk -- scripts/common.sh@338 -- $ local 'op=>=' 00:02:57.547 06:36:50 build_native_dpdk -- scripts/common.sh@340 -- $ ver1_l=3 00:02:57.547 06:36:50 build_native_dpdk -- scripts/common.sh@341 -- $ ver2_l=3 00:02:57.547 06:36:50 build_native_dpdk -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v 00:02:57.547 06:36:50 build_native_dpdk -- scripts/common.sh@344 -- $ case "$op" in 00:02:57.547 06:36:50 build_native_dpdk -- scripts/common.sh@348 -- $ : 1 00:02:57.547 06:36:50 build_native_dpdk -- scripts/common.sh@364 -- $ (( v = 0 )) 00:02:57.547 06:36:50 build_native_dpdk -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:02:57.547 06:36:50 build_native_dpdk -- scripts/common.sh@365 -- $ decimal 23 00:02:57.547 06:36:50 build_native_dpdk -- scripts/common.sh@353 -- $ local d=23 00:02:57.547 06:36:50 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 23 =~ ^[0-9]+$ ]] 00:02:57.547 06:36:50 build_native_dpdk -- scripts/common.sh@355 -- $ echo 23 00:02:57.547 06:36:50 build_native_dpdk -- scripts/common.sh@365 -- $ ver1[v]=23 00:02:57.547 06:36:50 build_native_dpdk -- scripts/common.sh@366 -- $ decimal 24 00:02:57.547 06:36:50 build_native_dpdk -- scripts/common.sh@353 -- $ local d=24 00:02:57.547 06:36:50 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 24 =~ ^[0-9]+$ ]] 00:02:57.547 06:36:50 build_native_dpdk -- scripts/common.sh@355 -- $ echo 24 00:02:57.547 06:36:50 build_native_dpdk -- scripts/common.sh@366 -- $ ver2[v]=24 00:02:57.547 06:36:50 build_native_dpdk -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] )) 00:02:57.547 06:36:50 build_native_dpdk -- scripts/common.sh@368 -- $ (( ver1[v] < ver2[v] )) 00:02:57.547 06:36:50 build_native_dpdk -- scripts/common.sh@368 -- $ return 1 00:02:57.547 06:36:50 build_native_dpdk -- common/autobuild_common.sh@183 -- $ dpdk_kmods=false 00:02:57.547 06:36:50 build_native_dpdk -- common/autobuild_common.sh@184 -- $ uname -s 00:02:57.547 06:36:50 build_native_dpdk -- common/autobuild_common.sh@184 -- $ '[' Linux = FreeBSD ']' 00:02:57.547 06:36:50 build_native_dpdk -- common/autobuild_common.sh@188 -- $ printf %s, bus bus/pci bus/vdev mempool/ring net/i40e net/i40e/base 00:02:57.547 06:36:50 build_native_dpdk -- common/autobuild_common.sh@188 -- $ meson build-tmp --prefix=/home/vagrant/spdk_repo/dpdk/build --libdir lib -Denable_docs=false -Denable_kmods=false -Dtests=false -Dc_link_args= '-Dc_args=-fPIC -g -fcommon -Werror -Wno-stringop-overflow' -Dmachine=native -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base, 00:03:01.737 The Meson build system 00:03:01.737 Version: 1.5.0 00:03:01.737 Source dir: /home/vagrant/spdk_repo/dpdk 00:03:01.737 Build dir: /home/vagrant/spdk_repo/dpdk/build-tmp 00:03:01.737 Build type: native build 00:03:01.737 Program cat found: YES (/usr/bin/cat) 00:03:01.737 Project name: DPDK 00:03:01.737 Project version: 23.11.0 00:03:01.737 C compiler for the host machine: gcc (gcc 13.3.1 "gcc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:03:01.737 C linker for the host machine: gcc ld.bfd 2.40-14 00:03:01.737 Host machine cpu family: x86_64 00:03:01.737 Host machine cpu: x86_64 00:03:01.737 Message: ## Building in Developer Mode ## 00:03:01.737 Program pkg-config found: YES (/usr/bin/pkg-config) 00:03:01.737 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/dpdk/buildtools/check-symbols.sh) 00:03:01.737 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/dpdk/buildtools/options-ibverbs-static.sh) 00:03:01.737 Program python3 found: YES (/usr/bin/python3) 00:03:01.737 Program cat found: YES (/usr/bin/cat) 00:03:01.737 config/meson.build:113: WARNING: The "machine" option is deprecated. Please use "cpu_instruction_set" instead. 
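The three cmp_versions traces above (lt 23.11.0 21.11.0, lt 23.11.0 24.07.0, ge 23.11.0 24.07.0) all reduce to the same loop: split each version on the characters ".-:", then compare fields numerically left to right, stopping at the first difference. A compact sketch of that logic (the function name ver_lt is illustrative; the real helper is cmp_versions in scripts/common.sh):

    # Field-wise numeric version comparison, as walked through in the xtrace above.
    ver_lt() {
        local -a v1 v2; local i
        IFS=.-: read -ra v1 <<< "$1"
        IFS=.-: read -ra v2 <<< "$2"
        for ((i = 0; i < ${#v1[@]} || i < ${#v2[@]}; i++)); do
            (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0
            (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
        done
        return 1    # equal is not less-than
    }
    ver_lt 23.11.0 21.11.0 || echo "23.11.0 >= 21.11.0"   # matches the 'return 1' traced above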
00:03:01.737 Compiler for C supports arguments -march=native: YES 00:03:01.737 Checking for size of "void *" : 8 00:03:01.737 Checking for size of "void *" : 8 (cached) 00:03:01.737 Library m found: YES 00:03:01.737 Library numa found: YES 00:03:01.737 Has header "numaif.h" : YES 00:03:01.737 Library fdt found: NO 00:03:01.737 Library execinfo found: NO 00:03:01.737 Has header "execinfo.h" : YES 00:03:01.737 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:03:01.737 Run-time dependency libarchive found: NO (tried pkgconfig) 00:03:01.737 Run-time dependency libbsd found: NO (tried pkgconfig) 00:03:01.737 Run-time dependency jansson found: NO (tried pkgconfig) 00:03:01.737 Run-time dependency openssl found: YES 3.1.1 00:03:01.737 Run-time dependency libpcap found: YES 1.10.4 00:03:01.737 Has header "pcap.h" with dependency libpcap: YES 00:03:01.737 Compiler for C supports arguments -Wcast-qual: YES 00:03:01.737 Compiler for C supports arguments -Wdeprecated: YES 00:03:01.737 Compiler for C supports arguments -Wformat: YES 00:03:01.737 Compiler for C supports arguments -Wformat-nonliteral: NO 00:03:01.737 Compiler for C supports arguments -Wformat-security: NO 00:03:01.737 Compiler for C supports arguments -Wmissing-declarations: YES 00:03:01.737 Compiler for C supports arguments -Wmissing-prototypes: YES 00:03:01.737 Compiler for C supports arguments -Wnested-externs: YES 00:03:01.737 Compiler for C supports arguments -Wold-style-definition: YES 00:03:01.737 Compiler for C supports arguments -Wpointer-arith: YES 00:03:01.737 Compiler for C supports arguments -Wsign-compare: YES 00:03:01.737 Compiler for C supports arguments -Wstrict-prototypes: YES 00:03:01.737 Compiler for C supports arguments -Wundef: YES 00:03:01.737 Compiler for C supports arguments -Wwrite-strings: YES 00:03:01.737 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:03:01.737 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:03:01.737 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:03:01.738 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:03:01.738 Program objdump found: YES (/usr/bin/objdump) 00:03:01.738 Compiler for C supports arguments -mavx512f: YES 00:03:01.738 Checking if "AVX512 checking" compiles: YES 00:03:01.738 Fetching value of define "__SSE4_2__" : 1 00:03:01.738 Fetching value of define "__AES__" : 1 00:03:01.738 Fetching value of define "__AVX__" : 1 00:03:01.738 Fetching value of define "__AVX2__" : 1 00:03:01.738 Fetching value of define "__AVX512BW__" : 1 00:03:01.738 Fetching value of define "__AVX512CD__" : 1 00:03:01.738 Fetching value of define "__AVX512DQ__" : 1 00:03:01.738 Fetching value of define "__AVX512F__" : 1 00:03:01.738 Fetching value of define "__AVX512VL__" : 1 00:03:01.738 Fetching value of define "__PCLMUL__" : 1 00:03:01.738 Fetching value of define "__RDRND__" : 1 00:03:01.738 Fetching value of define "__RDSEED__" : 1 00:03:01.738 Fetching value of define "__VPCLMULQDQ__" : 1 00:03:01.738 Fetching value of define "__znver1__" : (undefined) 00:03:01.738 Fetching value of define "__znver2__" : (undefined) 00:03:01.738 Fetching value of define "__znver3__" : (undefined) 00:03:01.738 Fetching value of define "__znver4__" : (undefined) 00:03:01.738 Compiler for C supports arguments -Wno-format-truncation: YES 00:03:01.738 Message: lib/log: Defining dependency "log" 00:03:01.738 Message: lib/kvargs: Defining dependency "kvargs" 00:03:01.738 Message: lib/telemetry: Defining dependency "telemetry" 
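Each "Compiler for C supports arguments" line above is Meson compiling a tiny test program with the candidate flag and recording the exit status. A hand-rolled equivalent of that probe, assuming gcc on PATH (check_cflag is an illustrative helper, not a Meson API):

    # Probe whether the compiler accepts a flag, as the Meson checks above do.
    check_cflag() {
        echo 'int main(void) { return 0; }' |
            gcc -Werror "$1" -x c -o /dev/null - 2>/dev/null
    }
    check_cflag -mavx512f && echo 'Compiler for C supports arguments -mavx512f: YES'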
00:03:01.738 Checking for function "getentropy" : NO 00:03:01.738 Message: lib/eal: Defining dependency "eal" 00:03:01.738 Message: lib/ring: Defining dependency "ring" 00:03:01.738 Message: lib/rcu: Defining dependency "rcu" 00:03:01.738 Message: lib/mempool: Defining dependency "mempool" 00:03:01.738 Message: lib/mbuf: Defining dependency "mbuf" 00:03:01.738 Fetching value of define "__PCLMUL__" : 1 (cached) 00:03:01.738 Fetching value of define "__AVX512F__" : 1 (cached) 00:03:01.738 Fetching value of define "__AVX512BW__" : 1 (cached) 00:03:01.738 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:03:01.738 Fetching value of define "__AVX512VL__" : 1 (cached) 00:03:01.738 Fetching value of define "__VPCLMULQDQ__" : 1 (cached) 00:03:01.738 Compiler for C supports arguments -mpclmul: YES 00:03:01.738 Compiler for C supports arguments -maes: YES 00:03:01.738 Compiler for C supports arguments -mavx512f: YES (cached) 00:03:01.738 Compiler for C supports arguments -mavx512bw: YES 00:03:01.738 Compiler for C supports arguments -mavx512dq: YES 00:03:01.738 Compiler for C supports arguments -mavx512vl: YES 00:03:01.738 Compiler for C supports arguments -mvpclmulqdq: YES 00:03:01.738 Compiler for C supports arguments -mavx2: YES 00:03:01.738 Compiler for C supports arguments -mavx: YES 00:03:01.738 Message: lib/net: Defining dependency "net" 00:03:01.738 Message: lib/meter: Defining dependency "meter" 00:03:01.738 Message: lib/ethdev: Defining dependency "ethdev" 00:03:01.738 Message: lib/pci: Defining dependency "pci" 00:03:01.738 Message: lib/cmdline: Defining dependency "cmdline" 00:03:01.738 Message: lib/metrics: Defining dependency "metrics" 00:03:01.738 Message: lib/hash: Defining dependency "hash" 00:03:01.738 Message: lib/timer: Defining dependency "timer" 00:03:01.738 Fetching value of define "__AVX512F__" : 1 (cached) 00:03:01.738 Fetching value of define "__AVX512VL__" : 1 (cached) 00:03:01.738 Fetching value of define "__AVX512CD__" : 1 (cached) 00:03:01.738 Fetching value of define "__AVX512BW__" : 1 (cached) 00:03:01.738 Message: lib/acl: Defining dependency "acl" 00:03:01.738 Message: lib/bbdev: Defining dependency "bbdev" 00:03:01.738 Message: lib/bitratestats: Defining dependency "bitratestats" 00:03:01.738 Run-time dependency libelf found: YES 0.191 00:03:01.738 Message: lib/bpf: Defining dependency "bpf" 00:03:01.738 Message: lib/cfgfile: Defining dependency "cfgfile" 00:03:01.738 Message: lib/compressdev: Defining dependency "compressdev" 00:03:01.738 Message: lib/cryptodev: Defining dependency "cryptodev" 00:03:01.738 Message: lib/distributor: Defining dependency "distributor" 00:03:01.738 Message: lib/dmadev: Defining dependency "dmadev" 00:03:01.738 Message: lib/efd: Defining dependency "efd" 00:03:01.738 Message: lib/eventdev: Defining dependency "eventdev" 00:03:01.738 Message: lib/dispatcher: Defining dependency "dispatcher" 00:03:01.738 Message: lib/gpudev: Defining dependency "gpudev" 00:03:01.738 Message: lib/gro: Defining dependency "gro" 00:03:01.738 Message: lib/gso: Defining dependency "gso" 00:03:01.738 Message: lib/ip_frag: Defining dependency "ip_frag" 00:03:01.738 Message: lib/jobstats: Defining dependency "jobstats" 00:03:01.738 Message: lib/latencystats: Defining dependency "latencystats" 00:03:01.738 Message: lib/lpm: Defining dependency "lpm" 00:03:01.738 Fetching value of define "__AVX512F__" : 1 (cached) 00:03:01.738 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:03:01.738 Fetching value of define "__AVX512IFMA__" : 1 00:03:01.738 Message: 
lib/member: Defining dependency "member" 00:03:01.738 Message: lib/pcapng: Defining dependency "pcapng" 00:03:01.738 Compiler for C supports arguments -Wno-cast-qual: YES 00:03:01.738 Message: lib/power: Defining dependency "power" 00:03:01.738 Message: lib/rawdev: Defining dependency "rawdev" 00:03:01.738 Message: lib/regexdev: Defining dependency "regexdev" 00:03:01.738 Message: lib/mldev: Defining dependency "mldev" 00:03:01.738 Message: lib/rib: Defining dependency "rib" 00:03:01.738 Message: lib/reorder: Defining dependency "reorder" 00:03:01.738 Message: lib/sched: Defining dependency "sched" 00:03:01.738 Message: lib/security: Defining dependency "security" 00:03:01.738 Message: lib/stack: Defining dependency "stack" 00:03:01.738 Has header "linux/userfaultfd.h" : YES 00:03:01.738 Has header "linux/vduse.h" : YES 00:03:01.738 Message: lib/vhost: Defining dependency "vhost" 00:03:01.738 Message: lib/ipsec: Defining dependency "ipsec" 00:03:01.738 Message: lib/pdcp: Defining dependency "pdcp" 00:03:01.738 Fetching value of define "__AVX512F__" : 1 (cached) 00:03:01.738 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:03:01.738 Fetching value of define "__AVX512BW__" : 1 (cached) 00:03:01.738 Message: lib/fib: Defining dependency "fib" 00:03:01.738 Message: lib/port: Defining dependency "port" 00:03:01.738 Message: lib/pdump: Defining dependency "pdump" 00:03:01.738 Message: lib/table: Defining dependency "table" 00:03:01.738 Message: lib/pipeline: Defining dependency "pipeline" 00:03:01.738 Message: lib/graph: Defining dependency "graph" 00:03:01.738 Message: lib/node: Defining dependency "node" 00:03:01.738 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:03:01.738 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:03:01.738 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:03:01.738 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:03:02.675 Compiler for C supports arguments -Wno-sign-compare: YES 00:03:02.675 Compiler for C supports arguments -Wno-unused-value: YES 00:03:02.675 Compiler for C supports arguments -Wno-format: YES 00:03:02.675 Compiler for C supports arguments -Wno-format-security: YES 00:03:02.675 Compiler for C supports arguments -Wno-format-nonliteral: YES 00:03:02.675 Compiler for C supports arguments -Wno-strict-aliasing: YES 00:03:02.675 Compiler for C supports arguments -Wno-unused-but-set-variable: YES 00:03:02.675 Compiler for C supports arguments -Wno-unused-parameter: YES 00:03:02.675 Fetching value of define "__AVX512F__" : 1 (cached) 00:03:02.675 Fetching value of define "__AVX512BW__" : 1 (cached) 00:03:02.675 Compiler for C supports arguments -mavx512f: YES (cached) 00:03:02.675 Compiler for C supports arguments -mavx512bw: YES (cached) 00:03:02.675 Compiler for C supports arguments -march=skylake-avx512: YES 00:03:02.675 Message: drivers/net/i40e: Defining dependency "net_i40e" 00:03:02.675 Has header "sys/epoll.h" : YES 00:03:02.675 Program doxygen found: YES (/usr/local/bin/doxygen) 00:03:02.675 Configuring doxy-api-html.conf using configuration 00:03:02.675 Configuring doxy-api-man.conf using configuration 00:03:02.675 Program mandb found: YES (/usr/bin/mandb) 00:03:02.675 Program sphinx-build found: NO 00:03:02.675 Configuring rte_build_config.h using configuration 00:03:02.675 Message: 00:03:02.675 ================= 00:03:02.675 Applications Enabled 00:03:02.675 ================= 00:03:02.675 00:03:02.675 apps: 00:03:02.675 dumpcap, graph, pdump, proc-info, test-acl, test-bbdev, 
test-cmdline, test-compress-perf, 00:03:02.675 test-crypto-perf, test-dma-perf, test-eventdev, test-fib, test-flow-perf, test-gpudev, test-mldev, test-pipeline, 00:03:02.675 test-pmd, test-regex, test-sad, test-security-perf, 00:03:02.675 00:03:02.675 Message: 00:03:02.675 ================= 00:03:02.675 Libraries Enabled 00:03:02.675 ================= 00:03:02.675 00:03:02.676 libs: 00:03:02.676 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:03:02.676 net, meter, ethdev, pci, cmdline, metrics, hash, timer, 00:03:02.676 acl, bbdev, bitratestats, bpf, cfgfile, compressdev, cryptodev, distributor, 00:03:02.676 dmadev, efd, eventdev, dispatcher, gpudev, gro, gso, ip_frag, 00:03:02.676 jobstats, latencystats, lpm, member, pcapng, power, rawdev, regexdev, 00:03:02.676 mldev, rib, reorder, sched, security, stack, vhost, ipsec, 00:03:02.676 pdcp, fib, port, pdump, table, pipeline, graph, node, 00:03:02.676 00:03:02.676 00:03:02.676 Message: 00:03:02.676 =============== 00:03:02.676 Drivers Enabled 00:03:02.676 =============== 00:03:02.676 00:03:02.676 common: 00:03:02.676 00:03:02.676 bus: 00:03:02.676 pci, vdev, 00:03:02.676 mempool: 00:03:02.676 ring, 00:03:02.676 dma: 00:03:02.676 00:03:02.676 net: 00:03:02.676 i40e, 00:03:02.676 raw: 00:03:02.676 00:03:02.676 crypto: 00:03:02.676 00:03:02.676 compress: 00:03:02.676 00:03:02.676 regex: 00:03:02.676 00:03:02.676 ml: 00:03:02.676 00:03:02.676 vdpa: 00:03:02.676 00:03:02.676 event: 00:03:02.676 00:03:02.676 baseband: 00:03:02.676 00:03:02.676 gpu: 00:03:02.676 00:03:02.676 00:03:02.676 Message: 00:03:02.676 ================= 00:03:02.676 Content Skipped 00:03:02.676 ================= 00:03:02.676 00:03:02.676 apps: 00:03:02.676 00:03:02.676 libs: 00:03:02.676 00:03:02.676 drivers: 00:03:02.676 common/cpt: not in enabled drivers build config 00:03:02.676 common/dpaax: not in enabled drivers build config 00:03:02.676 common/iavf: not in enabled drivers build config 00:03:02.676 common/idpf: not in enabled drivers build config 00:03:02.676 common/mvep: not in enabled drivers build config 00:03:02.676 common/octeontx: not in enabled drivers build config 00:03:02.676 bus/auxiliary: not in enabled drivers build config 00:03:02.676 bus/cdx: not in enabled drivers build config 00:03:02.676 bus/dpaa: not in enabled drivers build config 00:03:02.676 bus/fslmc: not in enabled drivers build config 00:03:02.676 bus/ifpga: not in enabled drivers build config 00:03:02.676 bus/platform: not in enabled drivers build config 00:03:02.676 bus/vmbus: not in enabled drivers build config 00:03:02.676 common/cnxk: not in enabled drivers build config 00:03:02.676 common/mlx5: not in enabled drivers build config 00:03:02.676 common/nfp: not in enabled drivers build config 00:03:02.676 common/qat: not in enabled drivers build config 00:03:02.676 common/sfc_efx: not in enabled drivers build config 00:03:02.676 mempool/bucket: not in enabled drivers build config 00:03:02.676 mempool/cnxk: not in enabled drivers build config 00:03:02.676 mempool/dpaa: not in enabled drivers build config 00:03:02.676 mempool/dpaa2: not in enabled drivers build config 00:03:02.676 mempool/octeontx: not in enabled drivers build config 00:03:02.676 mempool/stack: not in enabled drivers build config 00:03:02.676 dma/cnxk: not in enabled drivers build config 00:03:02.676 dma/dpaa: not in enabled drivers build config 00:03:02.676 dma/dpaa2: not in enabled drivers build config 00:03:02.676 dma/hisilicon: not in enabled drivers build config 00:03:02.676 dma/idxd: not in enabled drivers build 
config 00:03:02.676 dma/ioat: not in enabled drivers build config 00:03:02.676 dma/skeleton: not in enabled drivers build config 00:03:02.676 net/af_packet: not in enabled drivers build config 00:03:02.676 net/af_xdp: not in enabled drivers build config 00:03:02.676 net/ark: not in enabled drivers build config 00:03:02.676 net/atlantic: not in enabled drivers build config 00:03:02.676 net/avp: not in enabled drivers build config 00:03:02.676 net/axgbe: not in enabled drivers build config 00:03:02.676 net/bnx2x: not in enabled drivers build config 00:03:02.676 net/bnxt: not in enabled drivers build config 00:03:02.676 net/bonding: not in enabled drivers build config 00:03:02.676 net/cnxk: not in enabled drivers build config 00:03:02.676 net/cpfl: not in enabled drivers build config 00:03:02.676 net/cxgbe: not in enabled drivers build config 00:03:02.676 net/dpaa: not in enabled drivers build config 00:03:02.676 net/dpaa2: not in enabled drivers build config 00:03:02.676 net/e1000: not in enabled drivers build config 00:03:02.676 net/ena: not in enabled drivers build config 00:03:02.676 net/enetc: not in enabled drivers build config 00:03:02.676 net/enetfec: not in enabled drivers build config 00:03:02.676 net/enic: not in enabled drivers build config 00:03:02.676 net/failsafe: not in enabled drivers build config 00:03:02.676 net/fm10k: not in enabled drivers build config 00:03:02.676 net/gve: not in enabled drivers build config 00:03:02.676 net/hinic: not in enabled drivers build config 00:03:02.676 net/hns3: not in enabled drivers build config 00:03:02.676 net/iavf: not in enabled drivers build config 00:03:02.676 net/ice: not in enabled drivers build config 00:03:02.676 net/idpf: not in enabled drivers build config 00:03:02.676 net/igc: not in enabled drivers build config 00:03:02.676 net/ionic: not in enabled drivers build config 00:03:02.676 net/ipn3ke: not in enabled drivers build config 00:03:02.676 net/ixgbe: not in enabled drivers build config 00:03:02.676 net/mana: not in enabled drivers build config 00:03:02.676 net/memif: not in enabled drivers build config 00:03:02.676 net/mlx4: not in enabled drivers build config 00:03:02.676 net/mlx5: not in enabled drivers build config 00:03:02.676 net/mvneta: not in enabled drivers build config 00:03:02.676 net/mvpp2: not in enabled drivers build config 00:03:02.676 net/netvsc: not in enabled drivers build config 00:03:02.676 net/nfb: not in enabled drivers build config 00:03:02.676 net/nfp: not in enabled drivers build config 00:03:02.676 net/ngbe: not in enabled drivers build config 00:03:02.676 net/null: not in enabled drivers build config 00:03:02.676 net/octeontx: not in enabled drivers build config 00:03:02.676 net/octeon_ep: not in enabled drivers build config 00:03:02.676 net/pcap: not in enabled drivers build config 00:03:02.676 net/pfe: not in enabled drivers build config 00:03:02.676 net/qede: not in enabled drivers build config 00:03:02.676 net/ring: not in enabled drivers build config 00:03:02.676 net/sfc: not in enabled drivers build config 00:03:02.676 net/softnic: not in enabled drivers build config 00:03:02.676 net/tap: not in enabled drivers build config 00:03:02.676 net/thunderx: not in enabled drivers build config 00:03:02.676 net/txgbe: not in enabled drivers build config 00:03:02.676 net/vdev_netvsc: not in enabled drivers build config 00:03:02.676 net/vhost: not in enabled drivers build config 00:03:02.676 net/virtio: not in enabled drivers build config 00:03:02.676 net/vmxnet3: not in enabled drivers build config 
00:03:02.676 raw/cnxk_bphy: not in enabled drivers build config 00:03:02.676 raw/cnxk_gpio: not in enabled drivers build config 00:03:02.676 raw/dpaa2_cmdif: not in enabled drivers build config 00:03:02.676 raw/ifpga: not in enabled drivers build config 00:03:02.676 raw/ntb: not in enabled drivers build config 00:03:02.676 raw/skeleton: not in enabled drivers build config 00:03:02.676 crypto/armv8: not in enabled drivers build config 00:03:02.676 crypto/bcmfs: not in enabled drivers build config 00:03:02.676 crypto/caam_jr: not in enabled drivers build config 00:03:02.676 crypto/ccp: not in enabled drivers build config 00:03:02.676 crypto/cnxk: not in enabled drivers build config 00:03:02.676 crypto/dpaa_sec: not in enabled drivers build config 00:03:02.676 crypto/dpaa2_sec: not in enabled drivers build config 00:03:02.676 crypto/ipsec_mb: not in enabled drivers build config 00:03:02.676 crypto/mlx5: not in enabled drivers build config 00:03:02.676 crypto/mvsam: not in enabled drivers build config 00:03:02.676 crypto/nitrox: not in enabled drivers build config 00:03:02.676 crypto/null: not in enabled drivers build config 00:03:02.676 crypto/octeontx: not in enabled drivers build config 00:03:02.676 crypto/openssl: not in enabled drivers build config 00:03:02.676 crypto/scheduler: not in enabled drivers build config 00:03:02.676 crypto/uadk: not in enabled drivers build config 00:03:02.676 crypto/virtio: not in enabled drivers build config 00:03:02.676 compress/isal: not in enabled drivers build config 00:03:02.676 compress/mlx5: not in enabled drivers build config 00:03:02.676 compress/octeontx: not in enabled drivers build config 00:03:02.676 compress/zlib: not in enabled drivers build config 00:03:02.676 regex/mlx5: not in enabled drivers build config 00:03:02.676 regex/cn9k: not in enabled drivers build config 00:03:02.676 ml/cnxk: not in enabled drivers build config 00:03:02.676 vdpa/ifc: not in enabled drivers build config 00:03:02.676 vdpa/mlx5: not in enabled drivers build config 00:03:02.676 vdpa/nfp: not in enabled drivers build config 00:03:02.676 vdpa/sfc: not in enabled drivers build config 00:03:02.676 event/cnxk: not in enabled drivers build config 00:03:02.676 event/dlb2: not in enabled drivers build config 00:03:02.676 event/dpaa: not in enabled drivers build config 00:03:02.676 event/dpaa2: not in enabled drivers build config 00:03:02.676 event/dsw: not in enabled drivers build config 00:03:02.676 event/opdl: not in enabled drivers build config 00:03:02.677 event/skeleton: not in enabled drivers build config 00:03:02.677 event/sw: not in enabled drivers build config 00:03:02.677 event/octeontx: not in enabled drivers build config 00:03:02.677 baseband/acc: not in enabled drivers build config 00:03:02.677 baseband/fpga_5gnr_fec: not in enabled drivers build config 00:03:02.677 baseband/fpga_lte_fec: not in enabled drivers build config 00:03:02.677 baseband/la12xx: not in enabled drivers build config 00:03:02.677 baseband/null: not in enabled drivers build config 00:03:02.677 baseband/turbo_sw: not in enabled drivers build config 00:03:02.677 gpu/cuda: not in enabled drivers build config 00:03:02.677 00:03:02.677 00:03:02.677 Build targets in project: 215 00:03:02.677 00:03:02.677 DPDK 23.11.0 00:03:02.677 00:03:02.677 User defined options 00:03:02.677 libdir : lib 00:03:02.677 prefix : /home/vagrant/spdk_repo/dpdk/build 00:03:02.677 c_args : -fPIC -g -fcommon -Werror -Wno-stringop-overflow 00:03:02.677 c_link_args : 00:03:02.677 enable_docs : false 00:03:02.677 
enable_drivers: bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base, 00:03:02.677 enable_kmods : false 00:03:02.677 machine : native 00:03:02.677 tests : false 00:03:02.677 00:03:02.677 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:03:02.677 WARNING: Running the setup command as `meson [options]` instead of `meson setup [options]` is ambiguous and deprecated. 00:03:02.934 06:36:55 build_native_dpdk -- common/autobuild_common.sh@192 -- $ ninja -C /home/vagrant/spdk_repo/dpdk/build-tmp -j10 00:03:02.934 ninja: Entering directory `/home/vagrant/spdk_repo/dpdk/build-tmp' 00:03:02.934 [1/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:03:02.934 [2/705] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:03:02.934 [3/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:03:02.934 [4/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:03:02.934 [5/705] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:03:02.934 [6/705] Linking static target lib/librte_kvargs.a 00:03:02.934 [7/705] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:03:03.192 [8/705] Compiling C object lib/librte_log.a.p/log_log.c.o 00:03:03.192 [9/705] Linking static target lib/librte_log.a 00:03:03.192 [10/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:03:03.192 [11/705] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:03:03.192 [12/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:03:03.192 [13/705] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:03:03.192 [14/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:03:03.450 [15/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:03:03.450 [16/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:03:03.450 [17/705] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:03:03.450 [18/705] Linking target lib/librte_log.so.24.0 00:03:03.450 [19/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:03:03.450 [20/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:03:03.450 [21/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:03:03.708 [22/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:03:03.708 [23/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:03:03.708 [24/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:03:03.708 [25/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:03:03.708 [26/705] Generating symbol file lib/librte_log.so.24.0.p/librte_log.so.24.0.symbols 00:03:03.708 [27/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:03:03.708 [28/705] Linking target lib/librte_kvargs.so.24.0 00:03:03.708 [29/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:03:03.708 [30/705] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:03:03.708 [31/705] Linking static target lib/librte_telemetry.a 00:03:03.708 [32/705] Generating symbol file lib/librte_kvargs.so.24.0.p/librte_kvargs.so.24.0.symbols 00:03:03.966 [33/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o
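The WARNING above is meson deprecating the bare `meson [options]` invocation in favor of `meson setup [options]`. A sketch of the equivalent non-deprecated configure step for the build summarized under "User defined options" (values are copied from this log; the CI wrapper autobuild_common.sh may compose the real command differently):

    # Configure the DPDK build tree with the options shown in the summary above,
    # then build it with the same ninja invocation the log records.
    meson setup build-tmp \
        --prefix=/home/vagrant/spdk_repo/dpdk/build \
        --libdir=lib \
        -Dc_args='-fPIC -g -fcommon -Werror -Wno-stringop-overflow' \
        -Denable_docs=false \
        -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base \
        -Denable_kmods=false \
        -Dmachine=native \
        -Dtests=false
    ninja -C build-tmp -j10

00:03:03.966 [34/705] Compiling C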
object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:03:03.966 [35/705] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:03:03.966 [36/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:03:03.966 [37/705] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:03:03.966 [38/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:03:03.966 [39/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:03:03.966 [40/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:03:03.966 [41/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:03:04.224 [42/705] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:03:04.224 [43/705] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:03:04.224 [44/705] Linking target lib/librte_telemetry.so.24.0 00:03:04.224 [45/705] Generating symbol file lib/librte_telemetry.so.24.0.p/librte_telemetry.so.24.0.symbols 00:03:04.224 [46/705] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:03:04.482 [47/705] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:03:04.482 [48/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:03:04.482 [49/705] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:03:04.482 [50/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:03:04.482 [51/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:03:04.482 [52/705] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:03:04.482 [53/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:03:04.482 [54/705] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:03:04.740 [55/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:03:04.740 [56/705] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:03:04.740 [57/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:03:04.740 [58/705] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:03:04.740 [59/705] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:03:04.740 [60/705] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:03:04.740 [61/705] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:03:04.740 [62/705] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:03:04.740 [63/705] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:03:04.741 [64/705] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:03:04.741 [65/705] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:03:04.741 [66/705] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:03:04.999 [67/705] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:03:04.999 [68/705] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:03:04.999 [69/705] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:03:04.999 [70/705] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:03:04.999 [71/705] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:03:04.999 [72/705] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:03:04.999 [73/705] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:03:04.999 
[74/705] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:03:04.999 [75/705] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:03:04.999 [76/705] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:03:05.258 [77/705] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:03:05.258 [78/705] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:03:05.258 [79/705] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:03:05.258 [80/705] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:03:05.258 [81/705] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:03:05.518 [82/705] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:03:05.518 [83/705] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:03:05.518 [84/705] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:03:05.518 [85/705] Linking static target lib/librte_ring.a 00:03:05.518 [86/705] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:03:05.518 [87/705] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:03:05.518 [88/705] Linking static target lib/librte_eal.a 00:03:05.518 [89/705] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:03:05.518 [90/705] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:03:05.776 [91/705] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:03:05.776 [92/705] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:03:05.776 [93/705] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:03:05.776 [94/705] Linking static target lib/librte_mempool.a 00:03:05.776 [95/705] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:03:05.776 [96/705] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:03:05.776 [97/705] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:03:05.776 [98/705] Linking static target lib/librte_rcu.a 00:03:05.776 [99/705] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:03:06.035 [100/705] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:03:06.035 [101/705] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:03:06.035 [102/705] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:03:06.035 [103/705] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:03:06.035 [104/705] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:03:06.292 [105/705] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:03:06.292 [106/705] Compiling C object lib/librte_net.a.p/net_net_crc_avx512.c.o 00:03:06.292 [107/705] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:03:06.292 [108/705] Linking static target lib/librte_meter.a 00:03:06.292 [109/705] Linking static target lib/librte_net.a 00:03:06.292 [110/705] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:03:06.292 [111/705] Linking static target lib/librte_mbuf.a 00:03:06.292 [112/705] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:03:06.292 [113/705] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:03:06.292 [114/705] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:03:06.292 [115/705] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:03:06.292 
[116/705] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:03:06.549 [117/705] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:03:06.549 [118/705] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:03:06.549 [119/705] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:03:06.807 [120/705] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:03:06.807 [121/705] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:03:06.807 [122/705] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:03:06.807 [123/705] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:03:07.066 [124/705] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:03:07.066 [125/705] Linking static target lib/librte_pci.a 00:03:07.066 [126/705] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:03:07.066 [127/705] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:03:07.066 [128/705] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:03:07.066 [129/705] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:03:07.066 [130/705] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:03:07.066 [131/705] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:03:07.066 [132/705] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:03:07.066 [133/705] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:03:07.324 [134/705] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:03:07.324 [135/705] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:03:07.324 [136/705] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:03:07.324 [137/705] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:03:07.324 [138/705] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:03:07.324 [139/705] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:03:07.324 [140/705] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:03:07.324 [141/705] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:03:07.324 [142/705] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:03:07.324 [143/705] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:03:07.324 [144/705] Linking static target lib/librte_cmdline.a 00:03:07.583 [145/705] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics.c.o 00:03:07.583 [146/705] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics_telemetry.c.o 00:03:07.583 [147/705] Linking static target lib/librte_metrics.a 00:03:07.583 [148/705] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:03:07.583 [149/705] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:03:07.841 [150/705] Generating lib/metrics.sym_chk with a custom command (wrapped by meson to capture output) 00:03:07.841 [151/705] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:03:07.841 [152/705] Linking static target lib/librte_timer.a 00:03:07.841 [153/705] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:03:07.841 [154/705] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:03:08.099 [155/705] Generating lib/timer.sym_chk with a custom command (wrapped by 
meson to capture output) 00:03:08.099 [156/705] Compiling C object lib/librte_acl.a.p/acl_tb_mem.c.o 00:03:08.099 [157/705] Compiling C object lib/librte_acl.a.p/acl_rte_acl.c.o 00:03:08.100 [158/705] Compiling C object lib/librte_acl.a.p/acl_acl_gen.c.o 00:03:08.100 [159/705] Compiling C object lib/librte_acl.a.p/acl_acl_run_scalar.c.o 00:03:08.357 [160/705] Compiling C object lib/librte_bitratestats.a.p/bitratestats_rte_bitrate.c.o 00:03:08.358 [161/705] Linking static target lib/librte_bitratestats.a 00:03:08.616 [162/705] Compiling C object lib/librte_bpf.a.p/bpf_bpf.c.o 00:03:08.616 [163/705] Generating lib/bitratestats.sym_chk with a custom command (wrapped by meson to capture output) 00:03:08.616 [164/705] Compiling C object lib/librte_acl.a.p/acl_acl_bld.c.o 00:03:08.616 [165/705] Compiling C object lib/librte_bbdev.a.p/bbdev_rte_bbdev.c.o 00:03:08.616 [166/705] Linking static target lib/librte_bbdev.a 00:03:08.874 [167/705] Compiling C object lib/librte_bpf.a.p/bpf_bpf_dump.c.o 00:03:08.874 [168/705] Compiling C object lib/librte_bpf.a.p/bpf_bpf_exec.c.o 00:03:08.874 [169/705] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load.c.o 00:03:09.133 [170/705] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:03:09.133 [171/705] Linking static target lib/librte_hash.a 00:03:09.133 [172/705] Compiling C object lib/acl/libavx2_tmp.a.p/acl_run_avx2.c.o 00:03:09.133 [173/705] Generating lib/bbdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:09.133 [174/705] Linking static target lib/acl/libavx2_tmp.a 00:03:09.133 [175/705] Compiling C object lib/librte_acl.a.p/acl_acl_run_sse.c.o 00:03:09.133 [176/705] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:03:09.133 [177/705] Linking static target lib/librte_ethdev.a 00:03:09.133 [178/705] Compiling C object lib/librte_bpf.a.p/bpf_bpf_stub.c.o 00:03:09.391 [179/705] Compiling C object lib/librte_bpf.a.p/bpf_bpf_pkt.c.o 00:03:09.391 [180/705] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:03:09.391 [181/705] Linking target lib/librte_eal.so.24.0 00:03:09.391 [182/705] Compiling C object lib/librte_cfgfile.a.p/cfgfile_rte_cfgfile.c.o 00:03:09.391 [183/705] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load_elf.c.o 00:03:09.391 [184/705] Linking static target lib/librte_cfgfile.a 00:03:09.391 [185/705] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:03:09.391 [186/705] Generating symbol file lib/librte_eal.so.24.0.p/librte_eal.so.24.0.symbols 00:03:09.648 [187/705] Linking target lib/librte_ring.so.24.0 00:03:09.648 [188/705] Compiling C object lib/librte_bpf.a.p/bpf_bpf_convert.c.o 00:03:09.648 [189/705] Linking target lib/librte_meter.so.24.0 00:03:09.648 [190/705] Generating symbol file lib/librte_ring.so.24.0.p/librte_ring.so.24.0.symbols 00:03:09.648 [191/705] Generating lib/cfgfile.sym_chk with a custom command (wrapped by meson to capture output) 00:03:09.648 [192/705] Linking target lib/librte_rcu.so.24.0 00:03:09.648 [193/705] Linking target lib/librte_mempool.so.24.0 00:03:09.648 [194/705] Generating symbol file lib/librte_meter.so.24.0.p/librte_meter.so.24.0.symbols 00:03:09.648 [195/705] Linking target lib/librte_pci.so.24.0 00:03:09.648 [196/705] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:03:09.648 [197/705] Compiling C object lib/librte_bpf.a.p/bpf_bpf_validate.c.o 00:03:09.648 [198/705] Generating symbol file 
lib/librte_rcu.so.24.0.p/librte_rcu.so.24.0.symbols 00:03:09.648 [199/705] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:03:09.648 [200/705] Linking target lib/librte_timer.so.24.0 00:03:09.648 [201/705] Linking target lib/librte_cfgfile.so.24.0 00:03:09.648 [202/705] Generating symbol file lib/librte_mempool.so.24.0.p/librte_mempool.so.24.0.symbols 00:03:09.907 [203/705] Generating symbol file lib/librte_pci.so.24.0.p/librte_pci.so.24.0.symbols 00:03:09.907 [204/705] Linking target lib/librte_mbuf.so.24.0 00:03:09.907 [205/705] Generating symbol file lib/librte_timer.so.24.0.p/librte_timer.so.24.0.symbols 00:03:09.907 [206/705] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:03:09.907 [207/705] Linking static target lib/librte_compressdev.a 00:03:09.907 [208/705] Generating symbol file lib/librte_mbuf.so.24.0.p/librte_mbuf.so.24.0.symbols 00:03:09.907 [209/705] Linking target lib/librte_net.so.24.0 00:03:09.907 [210/705] Compiling C object lib/librte_acl.a.p/acl_acl_run_avx512.c.o 00:03:09.907 [211/705] Linking static target lib/librte_acl.a 00:03:09.907 [212/705] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:03:09.907 [213/705] Generating symbol file lib/librte_net.so.24.0.p/librte_net.so.24.0.symbols 00:03:09.907 [214/705] Linking target lib/librte_bbdev.so.24.0 00:03:09.907 [215/705] Linking target lib/librte_cmdline.so.24.0 00:03:10.166 [216/705] Linking target lib/librte_hash.so.24.0 00:03:10.166 [217/705] Compiling C object lib/librte_bpf.a.p/bpf_bpf_jit_x86.c.o 00:03:10.166 [218/705] Linking static target lib/librte_bpf.a 00:03:10.166 [219/705] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:03:10.166 [220/705] Generating symbol file lib/librte_hash.so.24.0.p/librte_hash.so.24.0.symbols 00:03:10.166 [221/705] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_single.c.o 00:03:10.166 [222/705] Generating lib/acl.sym_chk with a custom command (wrapped by meson to capture output) 00:03:10.166 [223/705] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:10.166 [224/705] Linking target lib/librte_compressdev.so.24.0 00:03:10.166 [225/705] Linking target lib/librte_acl.so.24.0 00:03:10.166 [226/705] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_match_sse.c.o 00:03:10.423 [227/705] Generating lib/bpf.sym_chk with a custom command (wrapped by meson to capture output) 00:03:10.423 [228/705] Generating symbol file lib/librte_acl.so.24.0.p/librte_acl.so.24.0.symbols 00:03:10.423 [229/705] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor.c.o 00:03:10.423 [230/705] Linking static target lib/librte_distributor.a 00:03:10.423 [231/705] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:03:10.423 [232/705] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_private.c.o 00:03:10.423 [233/705] Generating lib/distributor.sym_chk with a custom command (wrapped by meson to capture output) 00:03:10.681 [234/705] Linking target lib/librte_distributor.so.24.0 00:03:10.681 [235/705] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:03:10.681 [236/705] Linking static target lib/librte_dmadev.a 00:03:10.681 [237/705] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_trace_points.c.o 00:03:10.940 [238/705] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_ring.c.o 00:03:10.940 [239/705] 
Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:10.940 [240/705] Linking target lib/librte_dmadev.so.24.0 00:03:10.940 [241/705] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_dma_adapter.c.o 00:03:10.940 [242/705] Generating symbol file lib/librte_dmadev.so.24.0.p/librte_dmadev.so.24.0.symbols 00:03:11.206 [243/705] Compiling C object lib/librte_efd.a.p/efd_rte_efd.c.o 00:03:11.206 [244/705] Linking static target lib/librte_efd.a 00:03:11.206 [245/705] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_crypto_adapter.c.o 00:03:11.206 [246/705] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:03:11.206 [247/705] Linking static target lib/librte_cryptodev.a 00:03:11.206 [248/705] Generating lib/efd.sym_chk with a custom command (wrapped by meson to capture output) 00:03:11.472 [249/705] Linking target lib/librte_efd.so.24.0 00:03:11.472 [250/705] Compiling C object lib/librte_dispatcher.a.p/dispatcher_rte_dispatcher.c.o 00:03:11.472 [251/705] Linking static target lib/librte_dispatcher.a 00:03:11.472 [252/705] Compiling C object lib/librte_gpudev.a.p/gpudev_gpudev.c.o 00:03:11.472 [253/705] Linking static target lib/librte_gpudev.a 00:03:11.472 [254/705] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_timer_adapter.c.o 00:03:11.730 [255/705] Compiling C object lib/librte_gro.a.p/gro_rte_gro.c.o 00:03:11.730 [256/705] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_tx_adapter.c.o 00:03:11.730 [257/705] Generating lib/dispatcher.sym_chk with a custom command (wrapped by meson to capture output) 00:03:11.730 [258/705] Compiling C object lib/librte_gro.a.p/gro_gro_tcp4.c.o 00:03:11.989 [259/705] Compiling C object lib/librte_gro.a.p/gro_gro_tcp6.c.o 00:03:11.989 [260/705] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_eventdev.c.o 00:03:11.989 [261/705] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:11.989 [262/705] Generating lib/gpudev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:11.989 [263/705] Compiling C object lib/librte_gro.a.p/gro_gro_udp4.c.o 00:03:11.989 [264/705] Linking target lib/librte_cryptodev.so.24.0 00:03:11.989 [265/705] Linking target lib/librte_gpudev.so.24.0 00:03:11.989 [266/705] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_tcp4.c.o 00:03:11.989 [267/705] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_udp4.c.o 00:03:11.989 [268/705] Linking static target lib/librte_gro.a 00:03:11.989 [269/705] Generating symbol file lib/librte_cryptodev.so.24.0.p/librte_cryptodev.so.24.0.symbols 00:03:12.248 [270/705] Compiling C object lib/librte_gso.a.p/gso_gso_common.c.o 00:03:12.248 [271/705] Compiling C object lib/librte_gso.a.p/gso_gso_tcp4.c.o 00:03:12.248 [272/705] Compiling C object lib/librte_gso.a.p/gso_gso_udp4.c.o 00:03:12.248 [273/705] Generating lib/gro.sym_chk with a custom command (wrapped by meson to capture output) 00:03:12.248 [274/705] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_rx_adapter.c.o 00:03:12.248 [275/705] Linking static target lib/librte_eventdev.a 00:03:12.248 [276/705] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_udp4.c.o 00:03:12.248 [277/705] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_tcp4.c.o 00:03:12.248 [278/705] Compiling C object lib/librte_gso.a.p/gso_rte_gso.c.o 00:03:12.248 [279/705] Linking static target lib/librte_gso.a 00:03:12.248 [280/705] Generating lib/ethdev.sym_chk with 
a custom command (wrapped by meson to capture output) 00:03:12.506 [281/705] Linking target lib/librte_ethdev.so.24.0 00:03:12.506 [282/705] Generating lib/gso.sym_chk with a custom command (wrapped by meson to capture output) 00:03:12.506 [283/705] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_reassembly.c.o 00:03:12.506 [284/705] Compiling C object lib/librte_jobstats.a.p/jobstats_rte_jobstats.c.o 00:03:12.506 [285/705] Linking static target lib/librte_jobstats.a 00:03:12.506 [286/705] Generating symbol file lib/librte_ethdev.so.24.0.p/librte_ethdev.so.24.0.symbols 00:03:12.506 [287/705] Linking target lib/librte_metrics.so.24.0 00:03:12.506 [288/705] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_reassembly.c.o 00:03:12.506 [289/705] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_fragmentation.c.o 00:03:12.506 [290/705] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_fragmentation.c.o 00:03:12.506 [291/705] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ip_frag_common.c.o 00:03:12.506 [292/705] Linking target lib/librte_bpf.so.24.0 00:03:12.506 [293/705] Linking target lib/librte_gso.so.24.0 00:03:12.506 [294/705] Linking target lib/librte_gro.so.24.0 00:03:12.506 [295/705] Generating symbol file lib/librte_metrics.so.24.0.p/librte_metrics.so.24.0.symbols 00:03:12.766 [296/705] Linking target lib/librte_bitratestats.so.24.0 00:03:12.766 [297/705] Compiling C object lib/librte_ip_frag.a.p/ip_frag_ip_frag_internal.c.o 00:03:12.766 [298/705] Generating symbol file lib/librte_bpf.so.24.0.p/librte_bpf.so.24.0.symbols 00:03:12.766 [299/705] Linking static target lib/librte_ip_frag.a 00:03:12.766 [300/705] Generating lib/jobstats.sym_chk with a custom command (wrapped by meson to capture output) 00:03:12.766 [301/705] Compiling C object lib/librte_latencystats.a.p/latencystats_rte_latencystats.c.o 00:03:12.766 [302/705] Linking static target lib/librte_latencystats.a 00:03:12.766 [303/705] Linking target lib/librte_jobstats.so.24.0 00:03:12.766 [304/705] Generating lib/ip_frag.sym_chk with a custom command (wrapped by meson to capture output) 00:03:12.766 [305/705] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm.c.o 00:03:13.025 [306/705] Linking target lib/librte_ip_frag.so.24.0 00:03:13.025 [307/705] Generating lib/latencystats.sym_chk with a custom command (wrapped by meson to capture output) 00:03:13.025 [308/705] Compiling C object lib/librte_member.a.p/member_rte_member.c.o 00:03:13.025 [309/705] Linking target lib/librte_latencystats.so.24.0 00:03:13.025 [310/705] Generating symbol file lib/librte_ip_frag.so.24.0.p/librte_ip_frag.so.24.0.symbols 00:03:13.025 [311/705] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:03:13.025 [312/705] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:03:13.025 [313/705] Compiling C object lib/librte_member.a.p/member_rte_member_sketch_avx512.c.o 00:03:13.284 [314/705] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:03:13.284 [315/705] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm6.c.o 00:03:13.284 [316/705] Linking static target lib/librte_lpm.a 00:03:13.284 [317/705] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:03:13.284 [318/705] Compiling C object lib/librte_member.a.p/member_rte_member_ht.c.o 00:03:13.284 [319/705] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:03:13.284 [320/705] Compiling C object lib/librte_member.a.p/member_rte_member_vbf.c.o 00:03:13.284 [321/705] Compiling C 
object lib/librte_pcapng.a.p/pcapng_rte_pcapng.c.o 00:03:13.284 [322/705] Linking static target lib/librte_pcapng.a 00:03:13.544 [323/705] Generating lib/lpm.sym_chk with a custom command (wrapped by meson to capture output) 00:03:13.544 [324/705] Linking target lib/librte_lpm.so.24.0 00:03:13.544 [325/705] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:03:13.544 [326/705] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:03:13.544 [327/705] Generating lib/pcapng.sym_chk with a custom command (wrapped by meson to capture output) 00:03:13.544 [328/705] Generating symbol file lib/librte_lpm.so.24.0.p/librte_lpm.so.24.0.symbols 00:03:13.544 [329/705] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:03:13.544 [330/705] Linking target lib/librte_pcapng.so.24.0 00:03:13.544 [331/705] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:03:13.544 [332/705] Generating lib/eventdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:13.804 [333/705] Generating symbol file lib/librte_pcapng.so.24.0.p/librte_pcapng.so.24.0.symbols 00:03:13.804 [334/705] Linking target lib/librte_eventdev.so.24.0 00:03:13.804 [335/705] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:03:13.804 [336/705] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:03:13.804 [337/705] Generating symbol file lib/librte_eventdev.so.24.0.p/librte_eventdev.so.24.0.symbols 00:03:13.804 [338/705] Linking static target lib/librte_power.a 00:03:13.804 [339/705] Linking target lib/librte_dispatcher.so.24.0 00:03:13.804 [340/705] Compiling C object lib/librte_mldev.a.p/mldev_rte_mldev_pmd.c.o 00:03:13.804 [341/705] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils.c.o 00:03:13.804 [342/705] Compiling C object lib/librte_regexdev.a.p/regexdev_rte_regexdev.c.o 00:03:13.804 [343/705] Linking static target lib/librte_regexdev.a 00:03:14.064 [344/705] Compiling C object lib/librte_rawdev.a.p/rawdev_rte_rawdev.c.o 00:03:14.064 [345/705] Linking static target lib/librte_rawdev.a 00:03:14.064 [346/705] Compiling C object lib/librte_member.a.p/member_rte_member_sketch.c.o 00:03:14.064 [347/705] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils_scalar_bfloat16.c.o 00:03:14.064 [348/705] Linking static target lib/librte_member.a 00:03:14.064 [349/705] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils_scalar.c.o 00:03:14.064 [350/705] Compiling C object lib/librte_mldev.a.p/mldev_rte_mldev.c.o 00:03:14.064 [351/705] Linking static target lib/librte_mldev.a 00:03:14.064 [352/705] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:03:14.064 [353/705] Generating lib/member.sym_chk with a custom command (wrapped by meson to capture output) 00:03:14.325 [354/705] Linking target lib/librte_power.so.24.0 00:03:14.325 [355/705] Compiling C object lib/librte_rib.a.p/rib_rte_rib.c.o 00:03:14.325 [356/705] Linking target lib/librte_member.so.24.0 00:03:14.325 [357/705] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:03:14.325 [358/705] Generating lib/rawdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:14.325 [359/705] Linking static target lib/librte_reorder.a 00:03:14.325 [360/705] Linking target lib/librte_rawdev.so.24.0 00:03:14.325 [361/705] Compiling C object lib/librte_sched.a.p/sched_rte_red.c.o 00:03:14.325 [362/705] Compiling C object lib/librte_sched.a.p/sched_rte_approx.c.o 00:03:14.325 [363/705] 
Generating lib/regexdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:14.325 [364/705] Compiling C object lib/librte_rib.a.p/rib_rte_rib6.c.o 00:03:14.325 [365/705] Linking static target lib/librte_rib.a 00:03:14.325 [366/705] Linking target lib/librte_regexdev.so.24.0 00:03:14.325 [367/705] Compiling C object lib/librte_sched.a.p/sched_rte_pie.c.o 00:03:14.585 [368/705] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:03:14.585 [369/705] Linking target lib/librte_reorder.so.24.0 00:03:14.585 [370/705] Compiling C object lib/librte_stack.a.p/stack_rte_stack_std.c.o 00:03:14.585 [371/705] Compiling C object lib/librte_stack.a.p/stack_rte_stack.c.o 00:03:14.585 [372/705] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:03:14.585 [373/705] Generating symbol file lib/librte_reorder.so.24.0.p/librte_reorder.so.24.0.symbols 00:03:14.585 [374/705] Compiling C object lib/librte_stack.a.p/stack_rte_stack_lf.c.o 00:03:14.585 [375/705] Linking static target lib/librte_stack.a 00:03:14.585 [376/705] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:03:14.585 [377/705] Linking static target lib/librte_security.a 00:03:14.585 [378/705] Generating lib/rib.sym_chk with a custom command (wrapped by meson to capture output) 00:03:14.845 [379/705] Linking target lib/librte_rib.so.24.0 00:03:14.845 [380/705] Generating lib/stack.sym_chk with a custom command (wrapped by meson to capture output) 00:03:14.845 [381/705] Linking target lib/librte_stack.so.24.0 00:03:14.845 [382/705] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:03:14.845 [383/705] Generating symbol file lib/librte_rib.so.24.0.p/librte_rib.so.24.0.symbols 00:03:14.845 [384/705] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:03:14.845 [385/705] Generating lib/mldev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:14.845 [386/705] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:03:14.845 [387/705] Linking target lib/librte_mldev.so.24.0 00:03:14.845 [388/705] Linking target lib/librte_security.so.24.0 00:03:15.105 [389/705] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:03:15.105 [390/705] Generating symbol file lib/librte_security.so.24.0.p/librte_security.so.24.0.symbols 00:03:15.105 [391/705] Compiling C object lib/librte_sched.a.p/sched_rte_sched.c.o 00:03:15.105 [392/705] Linking static target lib/librte_sched.a 00:03:15.367 [393/705] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:03:15.367 [394/705] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:03:15.367 [395/705] Generating lib/sched.sym_chk with a custom command (wrapped by meson to capture output) 00:03:15.367 [396/705] Compiling C object lib/librte_ipsec.a.p/ipsec_sa.c.o 00:03:15.367 [397/705] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:03:15.367 [398/705] Linking target lib/librte_sched.so.24.0 00:03:15.367 [399/705] Generating symbol file lib/librte_sched.so.24.0.p/librte_sched.so.24.0.symbols 00:03:15.367 [400/705] Compiling C object lib/librte_ipsec.a.p/ipsec_ses.c.o 00:03:15.627 [401/705] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_telemetry.c.o 00:03:15.627 [402/705] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:03:15.627 [403/705] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_cnt.c.o 00:03:15.627 [404/705] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_ctrl_pdu.c.o 00:03:15.886 [405/705] 
Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_crypto.c.o 00:03:15.886 [406/705] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_reorder.c.o 00:03:15.886 [407/705] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_sad.c.o 00:03:16.150 [408/705] Compiling C object lib/librte_fib.a.p/fib_rte_fib.c.o 00:03:16.150 [409/705] Compiling C object lib/librte_fib.a.p/fib_rte_fib6.c.o 00:03:16.150 [410/705] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_outb.c.o 00:03:16.150 [411/705] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_inb.c.o 00:03:16.150 [412/705] Linking static target lib/librte_ipsec.a 00:03:16.150 [413/705] Compiling C object lib/librte_pdcp.a.p/pdcp_rte_pdcp.c.o 00:03:16.150 [414/705] Generating lib/ipsec.sym_chk with a custom command (wrapped by meson to capture output) 00:03:16.430 [415/705] Linking target lib/librte_ipsec.so.24.0 00:03:16.430 [416/705] Compiling C object lib/librte_fib.a.p/fib_dir24_8_avx512.c.o 00:03:16.430 [417/705] Generating symbol file lib/librte_ipsec.so.24.0.p/librte_ipsec.so.24.0.symbols 00:03:16.430 [418/705] Compiling C object lib/librte_fib.a.p/fib_trie_avx512.c.o 00:03:16.726 [419/705] Compiling C object lib/librte_port.a.p/port_rte_port_ethdev.c.o 00:03:16.726 [420/705] Compiling C object lib/librte_fib.a.p/fib_trie.c.o 00:03:16.726 [421/705] Compiling C object lib/librte_fib.a.p/fib_dir24_8.c.o 00:03:16.726 [422/705] Linking static target lib/librte_fib.a 00:03:16.726 [423/705] Compiling C object lib/librte_port.a.p/port_rte_port_frag.c.o 00:03:16.726 [424/705] Compiling C object lib/librte_port.a.p/port_rte_port_ras.c.o 00:03:16.726 [425/705] Generating lib/fib.sym_chk with a custom command (wrapped by meson to capture output) 00:03:16.726 [426/705] Compiling C object lib/librte_port.a.p/port_rte_port_sched.c.o 00:03:16.987 [427/705] Linking target lib/librte_fib.so.24.0 00:03:16.987 [428/705] Compiling C object lib/librte_port.a.p/port_rte_port_fd.c.o 00:03:16.987 [429/705] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_process.c.o 00:03:16.987 [430/705] Linking static target lib/librte_pdcp.a 00:03:16.987 [431/705] Generating lib/pdcp.sym_chk with a custom command (wrapped by meson to capture output) 00:03:17.247 [432/705] Linking target lib/librte_pdcp.so.24.0 00:03:17.247 [433/705] Compiling C object lib/librte_port.a.p/port_rte_port_source_sink.c.o 00:03:17.247 [434/705] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ethdev.c.o 00:03:17.247 [435/705] Compiling C object lib/librte_port.a.p/port_rte_port_sym_crypto.c.o 00:03:17.247 [436/705] Compiling C object lib/librte_port.a.p/port_rte_swx_port_fd.c.o 00:03:17.508 [437/705] Compiling C object lib/librte_port.a.p/port_rte_port_eventdev.c.o 00:03:17.508 [438/705] Compiling C object lib/librte_table.a.p/table_rte_swx_keycmp.c.o 00:03:17.508 [439/705] Compiling C object lib/librte_port.a.p/port_rte_swx_port_source_sink.c.o 00:03:17.768 [440/705] Compiling C object lib/librte_table.a.p/table_rte_swx_table_learner.c.o 00:03:17.768 [441/705] Compiling C object lib/librte_table.a.p/table_rte_swx_table_em.c.o 00:03:17.768 [442/705] Compiling C object lib/librte_port.a.p/port_rte_port_ring.c.o 00:03:17.768 [443/705] Compiling C object lib/librte_table.a.p/table_rte_swx_table_wm.c.o 00:03:17.768 [444/705] Compiling C object lib/librte_table.a.p/table_rte_swx_table_selector.c.o 00:03:17.768 [445/705] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ring.c.o 00:03:17.768 [446/705] Linking static target lib/librte_port.a 00:03:17.768 [447/705] Compiling C object 
lib/librte_pdump.a.p/pdump_rte_pdump.c.o 00:03:17.768 [448/705] Linking static target lib/librte_pdump.a 00:03:18.028 [449/705] Compiling C object lib/librte_table.a.p/table_rte_table_array.c.o 00:03:18.028 [450/705] Compiling C object lib/librte_table.a.p/table_rte_table_hash_cuckoo.c.o 00:03:18.029 [451/705] Generating lib/pdump.sym_chk with a custom command (wrapped by meson to capture output) 00:03:18.029 [452/705] Compiling C object lib/librte_table.a.p/table_rte_table_acl.c.o 00:03:18.029 [453/705] Linking target lib/librte_pdump.so.24.0 00:03:18.029 [454/705] Generating lib/port.sym_chk with a custom command (wrapped by meson to capture output) 00:03:18.029 [455/705] Linking target lib/librte_port.so.24.0 00:03:18.289 [456/705] Generating symbol file lib/librte_port.so.24.0.p/librte_port.so.24.0.symbols 00:03:18.289 [457/705] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key8.c.o 00:03:18.289 [458/705] Compiling C object lib/librte_table.a.p/table_rte_table_lpm.c.o 00:03:18.289 [459/705] Compiling C object lib/librte_table.a.p/table_rte_table_lpm_ipv6.c.o 00:03:18.289 [460/705] Compiling C object lib/librte_table.a.p/table_rte_table_hash_ext.c.o 00:03:18.289 [461/705] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:03:18.550 [462/705] Compiling C object lib/librte_table.a.p/table_rte_table_stub.c.o 00:03:18.550 [463/705] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key16.c.o 00:03:18.550 [464/705] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_port_in_action.c.o 00:03:18.550 [465/705] Compiling C object lib/librte_table.a.p/table_rte_table_hash_lru.c.o 00:03:18.811 [466/705] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key32.c.o 00:03:18.811 [467/705] Linking static target lib/librte_table.a 00:03:18.811 [468/705] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_pipeline.c.o 00:03:19.072 [469/705] Compiling C object lib/librte_graph.a.p/graph_node.c.o 00:03:19.072 [470/705] Compiling C object lib/librte_graph.a.p/graph_graph.c.o 00:03:19.072 [471/705] Generating lib/table.sym_chk with a custom command (wrapped by meson to capture output) 00:03:19.072 [472/705] Compiling C object lib/librte_graph.a.p/graph_graph_ops.c.o 00:03:19.072 [473/705] Linking target lib/librte_table.so.24.0 00:03:19.072 [474/705] Generating symbol file lib/librte_table.so.24.0.p/librte_table.so.24.0.symbols 00:03:19.332 [475/705] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ipsec.c.o 00:03:19.332 [476/705] Compiling C object lib/librte_graph.a.p/graph_graph_debug.c.o 00:03:19.332 [477/705] Compiling C object lib/librte_graph.a.p/graph_graph_populate.c.o 00:03:19.332 [478/705] Compiling C object lib/librte_graph.a.p/graph_graph_stats.c.o 00:03:19.332 [479/705] Compiling C object lib/librte_graph.a.p/graph_rte_graph_worker.c.o 00:03:19.591 [480/705] Compiling C object lib/librte_graph.a.p/graph_graph_pcap.c.o 00:03:19.591 [481/705] Compiling C object lib/librte_node.a.p/node_ethdev_ctrl.c.o 00:03:19.850 [482/705] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline_spec.c.o 00:03:19.850 [483/705] Compiling C object lib/librte_node.a.p/node_ethdev_tx.c.o 00:03:19.850 [484/705] Compiling C object lib/librte_node.a.p/node_ethdev_rx.c.o 00:03:19.850 [485/705] Compiling C object lib/librte_node.a.p/node_ip4_local.c.o 00:03:19.850 [486/705] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ctl.c.o 00:03:20.108 [487/705] Compiling C object 
lib/librte_graph.a.p/graph_rte_graph_model_mcore_dispatch.c.o 00:03:20.108 [488/705] Linking static target lib/librte_graph.a 00:03:20.108 [489/705] Compiling C object lib/librte_node.a.p/node_ip4_reassembly.c.o 00:03:20.367 [490/705] Compiling C object lib/librte_node.a.p/node_ip4_lookup.c.o 00:03:20.367 [491/705] Compiling C object lib/librte_node.a.p/node_ip6_lookup.c.o 00:03:20.367 [492/705] Compiling C object lib/librte_node.a.p/node_null.c.o 00:03:20.367 [493/705] Compiling C object lib/librte_node.a.p/node_kernel_rx.c.o 00:03:20.367 [494/705] Compiling C object lib/librte_node.a.p/node_kernel_tx.c.o 00:03:20.628 [495/705] Compiling C object lib/librte_node.a.p/node_ip4_rewrite.c.o 00:03:20.628 [496/705] Generating lib/graph.sym_chk with a custom command (wrapped by meson to capture output) 00:03:20.628 [497/705] Linking target lib/librte_graph.so.24.0 00:03:20.628 [498/705] Compiling C object lib/librte_node.a.p/node_log.c.o 00:03:20.628 [499/705] Compiling C object lib/librte_node.a.p/node_ip6_rewrite.c.o 00:03:20.628 [500/705] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:03:20.628 [501/705] Generating symbol file lib/librte_graph.so.24.0.p/librte_graph.so.24.0.symbols 00:03:20.628 [502/705] Compiling C object lib/librte_node.a.p/node_pkt_drop.c.o 00:03:20.888 [503/705] Compiling C object lib/librte_node.a.p/node_pkt_cls.c.o 00:03:20.888 [504/705] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:03:20.888 [505/705] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:03:20.888 [506/705] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:03:20.888 [507/705] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:03:21.148 [508/705] Compiling C object lib/librte_node.a.p/node_udp4_input.c.o 00:03:21.148 [509/705] Linking static target lib/librte_node.a 00:03:21.148 [510/705] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:03:21.148 [511/705] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:03:21.148 [512/705] Linking static target drivers/libtmp_rte_bus_pci.a 00:03:21.148 [513/705] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:03:21.148 [514/705] Linking static target drivers/libtmp_rte_bus_vdev.a 00:03:21.409 [515/705] Generating lib/node.sym_chk with a custom command (wrapped by meson to capture output) 00:03:21.409 [516/705] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:03:21.409 [517/705] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:03:21.409 [518/705] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:03:21.409 [519/705] Linking static target drivers/librte_bus_vdev.a 00:03:21.409 [520/705] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:03:21.409 [521/705] Compiling C object drivers/librte_bus_pci.so.24.0.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:03:21.409 [522/705] Linking static target drivers/librte_bus_pci.a 00:03:21.409 [523/705] Linking target lib/librte_node.so.24.0 00:03:21.409 [524/705] Compiling C object drivers/librte_bus_vdev.so.24.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:03:21.409 [525/705] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_adminq.c.o 00:03:21.409 [526/705] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:21.669 [527/705] Linking target 
drivers/librte_bus_vdev.so.24.0 00:03:21.669 [528/705] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_dcb.c.o 00:03:21.669 [529/705] Generating symbol file drivers/librte_bus_vdev.so.24.0.p/librte_bus_vdev.so.24.0.symbols 00:03:21.669 [530/705] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:03:21.669 [531/705] Linking target drivers/librte_bus_pci.so.24.0 00:03:21.669 [532/705] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_diag.c.o 00:03:21.669 [533/705] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:03:21.669 [534/705] Linking static target drivers/libtmp_rte_mempool_ring.a 00:03:21.670 [535/705] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_hmc.c.o 00:03:21.670 [536/705] Generating symbol file drivers/librte_bus_pci.so.24.0.p/librte_bus_pci.so.24.0.symbols 00:03:21.930 [537/705] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:03:21.930 [538/705] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:03:21.930 [539/705] Linking static target drivers/librte_mempool_ring.a 00:03:21.930 [540/705] Compiling C object drivers/librte_mempool_ring.so.24.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:03:21.930 [541/705] Linking target drivers/librte_mempool_ring.so.24.0 00:03:22.191 [542/705] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_lan_hmc.c.o 00:03:22.191 [543/705] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_common.c.o 00:03:22.452 [544/705] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_nvm.c.o 00:03:22.452 [545/705] Linking static target drivers/net/i40e/base/libi40e_base.a 00:03:22.711 [546/705] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_pf.c.o 00:03:22.970 [547/705] Compiling C object drivers/net/i40e/libi40e_avx512_lib.a.p/i40e_rxtx_vec_avx512.c.o 00:03:22.970 [548/705] Linking static target drivers/net/i40e/libi40e_avx512_lib.a 00:03:23.228 [549/705] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_fdir.c.o 00:03:23.228 [550/705] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_tm.c.o 00:03:23.228 [551/705] Compiling C object drivers/net/i40e/libi40e_avx2_lib.a.p/i40e_rxtx_vec_avx2.c.o 00:03:23.228 [552/705] Linking static target drivers/net/i40e/libi40e_avx2_lib.a 00:03:23.487 [553/705] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_hash.c.o 00:03:23.487 [554/705] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_vf_representor.c.o 00:03:23.487 [555/705] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_recycle_mbufs_vec_common.c.o 00:03:23.744 [556/705] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline.c.o 00:03:23.744 [557/705] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_flow.c.o 00:03:23.744 [558/705] Compiling C object app/dpdk-graph.p/graph_cli.c.o 00:03:24.003 [559/705] Compiling C object app/dpdk-graph.p/graph_conn.c.o 00:03:24.003 [560/705] Compiling C object app/dpdk-dumpcap.p/dumpcap_main.c.o 00:03:24.003 [561/705] Compiling C object app/dpdk-graph.p/graph_ethdev_rx.c.o 00:03:24.261 [562/705] Compiling C object app/dpdk-graph.p/graph_ethdev.c.o 00:03:24.261 [563/705] Compiling C object app/dpdk-graph.p/graph_ip6_route.c.o 00:03:24.261 [564/705] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_rte_pmd_i40e.c.o 00:03:24.261 [565/705] Compiling C object 
app/dpdk-graph.p/graph_ip4_route.c.o 00:03:24.520 [566/705] Compiling C object app/dpdk-graph.p/graph_graph.c.o 00:03:24.520 [567/705] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx_vec_sse.c.o 00:03:24.520 [568/705] Compiling C object app/dpdk-graph.p/graph_l3fwd.c.o 00:03:24.520 [569/705] Compiling C object app/dpdk-graph.p/graph_main.c.o 00:03:24.520 [570/705] Compiling C object app/dpdk-graph.p/graph_mempool.c.o 00:03:24.777 [571/705] Compiling C object app/dpdk-graph.p/graph_utils.c.o 00:03:24.778 [572/705] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_ethdev.c.o 00:03:24.778 [573/705] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx.c.o 00:03:24.778 [574/705] Linking static target drivers/libtmp_rte_net_i40e.a 00:03:24.778 [575/705] Compiling C object app/dpdk-graph.p/graph_neigh.c.o 00:03:24.778 [576/705] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_main.c.o 00:03:25.035 [577/705] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_commands.c.o 00:03:25.035 [578/705] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_cmdline_test.c.o 00:03:25.035 [579/705] Generating drivers/rte_net_i40e.pmd.c with a custom command 00:03:25.035 [580/705] Compiling C object drivers/librte_net_i40e.a.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:03:25.035 [581/705] Compiling C object app/dpdk-test-acl.p/test-acl_main.c.o 00:03:25.035 [582/705] Compiling C object drivers/librte_net_i40e.so.24.0.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:03:25.035 [583/705] Linking static target drivers/librte_net_i40e.a 00:03:25.293 [584/705] Compiling C object app/dpdk-pdump.p/pdump_main.c.o 00:03:25.293 [585/705] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_options_parse.c.o 00:03:25.293 [586/705] Generating drivers/rte_net_i40e.sym_chk with a custom command (wrapped by meson to capture output) 00:03:25.551 [587/705] Linking target drivers/librte_net_i40e.so.24.0 00:03:25.551 [588/705] Compiling C object app/dpdk-proc-info.p/proc-info_main.c.o 00:03:25.551 [589/705] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_vector.c.o 00:03:25.551 [590/705] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev.c.o 00:03:25.809 [591/705] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_common.c.o 00:03:25.809 [592/705] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_throughput.c.o 00:03:25.809 [593/705] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_main.c.o 00:03:25.809 [594/705] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_cyclecount.c.o 00:03:26.067 [595/705] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_common.c.o 00:03:26.067 [596/705] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_ops.c.o 00:03:26.067 [597/705] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_options_parsing.c.o 00:03:26.067 [598/705] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_verify.c.o 00:03:26.067 [599/705] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:03:26.325 [600/705] Linking static target lib/librte_vhost.a 00:03:26.325 [601/705] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vectors.c.o 00:03:26.325 [602/705] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vector_parsing.c.o 00:03:26.325 [603/705] 
Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_pmd_cyclecount.c.o 00:03:26.325 [604/705] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_latency.c.o 00:03:26.325 [605/705] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_throughput.c.o 00:03:26.582 [606/705] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_test.c.o 00:03:26.582 [607/705] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_main.c.o 00:03:26.582 [608/705] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_verify.c.o 00:03:26.582 [609/705] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_parser.c.o 00:03:26.582 [610/705] Compiling C object app/dpdk-test-dma-perf.p/test-dma-perf_main.c.o 00:03:26.582 [611/705] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_main.c.o 00:03:26.840 [612/705] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_options.c.o 00:03:26.840 [613/705] Compiling C object app/dpdk-test-dma-perf.p/test-dma-perf_benchmark.c.o 00:03:27.098 [614/705] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:03:27.098 [615/705] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_common.c.o 00:03:27.098 [616/705] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_atq.c.o 00:03:27.098 [617/705] Linking target lib/librte_vhost.so.24.0 00:03:27.098 [618/705] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_queue.c.o 00:03:27.664 [619/705] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_actions_gen.c.o 00:03:27.664 [620/705] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_atq.c.o 00:03:27.664 [621/705] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_atq.c.o 00:03:27.664 [622/705] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_table_action.c.o 00:03:27.664 [623/705] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_common.c.o 00:03:27.664 [624/705] Linking static target lib/librte_pipeline.a 00:03:27.664 [625/705] Compiling C object app/dpdk-test-fib.p/test-fib_main.c.o 00:03:27.664 [626/705] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_queue.c.o 00:03:27.922 [627/705] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_flow_gen.c.o 00:03:27.922 [628/705] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_items_gen.c.o 00:03:27.922 [629/705] Compiling C object app/dpdk-test-gpudev.p/test-gpudev_main.c.o 00:03:27.922 [630/705] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_queue.c.o 00:03:27.922 [631/705] Linking target app/dpdk-dumpcap 00:03:27.922 [632/705] Linking target app/dpdk-graph 00:03:28.180 [633/705] Linking target app/dpdk-pdump 00:03:28.180 [634/705] Linking target app/dpdk-proc-info 00:03:28.180 [635/705] Linking target app/dpdk-test-acl 00:03:28.180 [636/705] Linking target app/dpdk-test-crypto-perf 00:03:28.180 [637/705] Linking target app/dpdk-test-cmdline 00:03:28.438 [638/705] Linking target app/dpdk-test-compress-perf 00:03:28.438 [639/705] Linking target app/dpdk-test-dma-perf 00:03:28.438 [640/705] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_test.c.o 00:03:28.438 [641/705] Linking target app/dpdk-test-fib 00:03:28.438 [642/705] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_main.c.o 00:03:28.438 [643/705] Compiling C object app/dpdk-test-mldev.p/test-mldev_parser.c.o 
00:03:28.438 [644/705] Linking target app/dpdk-test-gpudev 00:03:28.438 [645/705] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_main.c.o 00:03:28.696 [646/705] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_options.c.o 00:03:28.696 [647/705] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_device_ops.c.o 00:03:28.696 [648/705] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_common.c.o 00:03:28.696 [649/705] Linking target app/dpdk-test-flow-perf 00:03:28.696 [650/705] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_model_common.c.o 00:03:28.696 [651/705] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_model_ops.c.o 00:03:28.696 [652/705] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_common.c.o 00:03:28.955 [653/705] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_ordered.c.o 00:03:28.955 [654/705] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_interleave.c.o 00:03:28.955 [655/705] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_stats.c.o 00:03:28.955 [656/705] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_init.c.o 00:03:29.213 [657/705] Linking target app/dpdk-test-eventdev 00:03:29.213 [658/705] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_config.c.o 00:03:29.213 [659/705] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_main.c.o 00:03:29.213 [660/705] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_acl.c.o 00:03:29.213 [661/705] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm.c.o 00:03:29.213 [662/705] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm_ipv6.c.o 00:03:29.471 [663/705] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_hash.c.o 00:03:29.471 [664/705] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_stub.c.o 00:03:29.471 [665/705] Compiling C object app/dpdk-testpmd.p/test-pmd_5tswap.c.o 00:03:29.471 [666/705] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_cman.c.o 00:03:29.729 [667/705] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_perf.c.o 00:03:29.729 [668/705] Generating lib/pipeline.sym_chk with a custom command (wrapped by meson to capture output) 00:03:29.729 [669/705] Linking target lib/librte_pipeline.so.24.0 00:03:29.729 [670/705] Compiling C object app/dpdk-testpmd.p/test-pmd_cmd_flex_item.c.o 00:03:29.729 [671/705] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_runtime.c.o 00:03:29.729 [672/705] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_common.c.o 00:03:29.729 [673/705] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_mtr.c.o 00:03:29.987 [674/705] Linking target app/dpdk-test-bbdev 00:03:29.987 [675/705] Linking target app/dpdk-test-mldev 00:03:30.246 [676/705] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_tm.c.o 00:03:30.246 [677/705] Compiling C object app/dpdk-testpmd.p/test-pmd_flowgen.c.o 00:03:30.246 [678/705] Linking target app/dpdk-test-pipeline 00:03:30.246 [679/705] Compiling C object app/dpdk-testpmd.p/test-pmd_icmpecho.c.o 00:03:30.246 [680/705] Compiling C object app/dpdk-testpmd.p/test-pmd_ieee1588fwd.c.o 00:03:30.504 [681/705] Compiling C object app/dpdk-testpmd.p/test-pmd_iofwd.c.o 00:03:30.504 [682/705] Compiling C object app/dpdk-testpmd.p/test-pmd_macfwd.c.o 00:03:30.504 [683/705] Compiling C object app/dpdk-testpmd.p/test-pmd_macswap.c.o 00:03:30.504 [684/705] Compiling C object 
app/dpdk-testpmd.p/test-pmd_recycle_mbufs.c.o 00:03:30.762 [685/705] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline.c.o 00:03:30.762 [686/705] Compiling C object app/dpdk-testpmd.p/test-pmd_rxonly.c.o 00:03:30.762 [687/705] Compiling C object app/dpdk-testpmd.p/test-pmd_csumonly.c.o 00:03:30.762 [688/705] Compiling C object app/dpdk-testpmd.p/test-pmd_shared_rxq_fwd.c.o 00:03:31.025 [689/705] Compiling C object app/dpdk-testpmd.p/test-pmd_parameters.c.o 00:03:31.025 [690/705] Compiling C object app/dpdk-testpmd.p/test-pmd_bpf_cmd.c.o 00:03:31.025 [691/705] Compiling C object app/dpdk-testpmd.p/test-pmd_util.c.o 00:03:31.288 [692/705] Compiling C object app/dpdk-testpmd.p/.._drivers_net_i40e_i40e_testpmd.c.o 00:03:31.288 [693/705] Compiling C object app/dpdk-test-sad.p/test-sad_main.c.o 00:03:31.288 [694/705] Compiling C object app/dpdk-test-security-perf.p/test-security-perf_test_security_perf.c.o 00:03:31.547 [695/705] Compiling C object app/dpdk-testpmd.p/test-pmd_txonly.c.o 00:03:31.547 [696/705] Compiling C object app/dpdk-test-regex.p/test-regex_main.c.o 00:03:31.547 [697/705] Linking target app/dpdk-test-sad 00:03:31.547 [698/705] Compiling C object app/dpdk-testpmd.p/test-pmd_testpmd.c.o 00:03:31.808 [699/705] Compiling C object app/dpdk-testpmd.p/test-pmd_config.c.o 00:03:31.808 [700/705] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_flow.c.o 00:03:31.808 [701/705] Compiling C object app/dpdk-test-security-perf.p/test_test_cryptodev_security_ipsec.c.o 00:03:31.808 [702/705] Linking target app/dpdk-test-regex 00:03:32.068 [703/705] Compiling C object app/dpdk-testpmd.p/test-pmd_noisy_vnf.c.o 00:03:32.068 [704/705] Linking target app/dpdk-test-security-perf 00:03:32.326 [705/705] Linking target app/dpdk-testpmd 00:03:32.326 06:37:25 build_native_dpdk -- common/autobuild_common.sh@194 -- $ uname -s 00:03:32.326 06:37:25 build_native_dpdk -- common/autobuild_common.sh@194 -- $ [[ Linux == \F\r\e\e\B\S\D ]] 00:03:32.326 06:37:25 build_native_dpdk -- common/autobuild_common.sh@207 -- $ ninja -C /home/vagrant/spdk_repo/dpdk/build-tmp -j10 install 00:03:32.326 ninja: Entering directory `/home/vagrant/spdk_repo/dpdk/build-tmp' 00:03:32.326 [0/1] Installing files. 
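The two trace lines above show common/autobuild_common.sh (line 194) checking the host OS before installing: `uname -s` returns Linux here, so the `[[ Linux == \F\r\e\e\B\S\D ]]` test fails and the script falls through to the `ninja ... install` invocation at line 207. A minimal sketch of the equivalent manual sequence, assuming the same checkout at /home/vagrant/spdk_repo/dpdk and the already-configured build-tmp directory seen in this log (the directory names are taken from the log, not from DPDK documentation):

    cd /home/vagrant/spdk_repo/dpdk
    # Skip any FreeBSD-only handling; this run is on a Linux host
    if [[ "$(uname -s)" != "FreeBSD" ]]; then
        # Install the libraries, PMDs, apps, and example sources built above
        ninja -C build-tmp -j10 install
    fi

The install phase that follows copies each example tree (bbdev_app, bond, bpf, cmdline, ...) from the source checkout into build/share/dpdk/examples, alongside the libraries and drivers linked earlier.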
00:03:32.592 Installing subdir /home/vagrant/spdk_repo/dpdk/examples to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples 00:03:32.592 Installing /home/vagrant/spdk_repo/dpdk/examples/bbdev_app/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bbdev_app 00:03:32.592 Installing /home/vagrant/spdk_repo/dpdk/examples/bbdev_app/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bbdev_app 00:03:32.592 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond 00:03:32.592 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond 00:03:32.592 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond 00:03:32.592 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/README to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:03:32.592 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/dummy.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:03:32.592 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t1.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:03:32.592 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t2.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:03:32.592 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t3.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:03:32.592 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:32.592 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:32.592 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/commands.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:32.592 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:32.592 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/parse_obj_list.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:32.592 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/parse_obj_list.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:32.592 Installing /home/vagrant/spdk_repo/dpdk/examples/common/pkt_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common 00:03:32.592 Installing /home/vagrant/spdk_repo/dpdk/examples/common/altivec/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/altivec 00:03:32.592 Installing /home/vagrant/spdk_repo/dpdk/examples/common/neon/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/neon 00:03:32.592 Installing /home/vagrant/spdk_repo/dpdk/examples/common/sse/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/sse 00:03:32.592 Installing /home/vagrant/spdk_repo/dpdk/examples/distributor/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/distributor 00:03:32.592 Installing /home/vagrant/spdk_repo/dpdk/examples/distributor/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/distributor 00:03:32.592 Installing /home/vagrant/spdk_repo/dpdk/examples/dma/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/dma 00:03:32.592 Installing /home/vagrant/spdk_repo/dpdk/examples/dma/dmafwd.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/dma 00:03:32.592 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool 00:03:32.592 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:32.592 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/ethapp.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:32.592 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/ethapp.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:32.592 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:32.592 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib 00:03:32.592 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/rte_ethtool.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib 00:03:32.592 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/rte_ethtool.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib 00:03:32.592 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:32.592 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:32.592 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:32.592 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_worker_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:32.592 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_worker_tx.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:32.592 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:32.592 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_dev_self_test.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:32.592 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_dev_self_test.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:32.592 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:32.592 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:32.592 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_aes.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:32.592 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_ccm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:32.592 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_cmac.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:32.592 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_ecdsa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:32.592 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_gcm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:32.592 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_hmac.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:32.592 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_rsa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:32.592 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_sha.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:32.592 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_tdes.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:32.592 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_xts.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:32.592 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:32.592 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering 00:03:32.592 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/flow_blocks.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering 00:03:32.592 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering 00:03:32.592 Installing /home/vagrant/spdk_repo/dpdk/examples/helloworld/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/helloworld 00:03:32.592 Installing /home/vagrant/spdk_repo/dpdk/examples/helloworld/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/helloworld 00:03:32.592 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_fragmentation/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_fragmentation 00:03:32.592 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_fragmentation/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_fragmentation 00:03:32.592 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:32.592 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/action.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:32.592 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/action.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:32.592 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:32.592 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cli.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:32.592 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:32.592 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/conn.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:32.592 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/conn.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:32.592 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cryptodev.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:32.592 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cryptodev.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:32.592 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/link.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:32.592 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/link.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:32.592 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:32.592 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/mempool.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:32.592 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/mempool.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:32.592 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/parser.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:32.593 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/parser.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:32.593 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/pipeline.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:32.593 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/pipeline.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:32.593 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/swq.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:32.593 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/swq.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:32.593 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tap.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:32.593 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tap.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:32.593 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:32.593 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/thread.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:32.593 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tmgr.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:32.593 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tmgr.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:32.593 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/firewall.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:32.593 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/flow.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:32.593 Installing 
/home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/flow_crypto.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:32.593 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/l2fwd.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:32.593 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/route.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:32.593 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/route_ecmp.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:32.593 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/rss.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:32.593 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/tap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:32.593 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_reassembly/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_reassembly 00:03:32.593 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_reassembly/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_reassembly 00:03:32.593 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:32.593 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ep0.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:32.593 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ep1.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:32.593 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/esp.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:32.593 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/esp.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:32.593 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/event_helper.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:32.593 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/event_helper.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:32.593 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/flow.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:32.593 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/flow.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:32.593 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipip.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:32.593 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec-secgw.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:32.593 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec-secgw.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:32.593 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:32.593 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:32.593 Installing 
/home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:32.593 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:32.593 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_process.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:32.593 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_worker.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:32.593 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_worker.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:32.593 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/parser.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:32.593 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/parser.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:32.593 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/rt.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:32.593 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:32.593 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sad.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:32.593 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sad.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:32.593 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sp4.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:32.593 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sp6.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:32.593 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/bypass_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:32.593 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:32.593 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/common_defs_secgw.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:32.593 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/data_rxtx.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:32.593 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/linux_test.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:32.593 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/load_env.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:32.593 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/pkttest.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:32.593 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/pkttest.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:32.593 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/run_test.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:32.593 Installing 
/home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:32.593 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:32.593 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:32.593 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:32.593 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:32.593 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:32.593 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesgcm_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:32.593 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesgcm_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:32.593 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_ipv6opts.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:32.593 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:32.593 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:32.593 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:32.593 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:32.593 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:32.593 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:32.593 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesgcm_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:32.593 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesgcm_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:32.593 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_null_header_reconstruct.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:32.593 Installing /home/vagrant/spdk_repo/dpdk/examples/ipv4_multicast/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipv4_multicast 00:03:32.593 Installing /home/vagrant/spdk_repo/dpdk/examples/ipv4_multicast/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipv4_multicast 00:03:32.593 
Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:32.593 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/cat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:32.593 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/cat.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:32.593 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/l2fwd-cat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:32.593 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-crypto/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:03:32.593 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-crypto/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:03:32.593 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:32.594 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_common.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:32.594 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:32.594 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:32.594 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:32.594 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:32.594 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event_internal_port.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:32.594 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_poll.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:32.594 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_poll.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:32.594 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:32.594 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-jobstats/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:03:32.594 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-jobstats/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:03:32.594 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:32.594 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:32.594 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/shm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:32.594 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/shm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:32.594 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/ka-agent/Makefile to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:03:32.594 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/ka-agent/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:03:32.594 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-macsec/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-macsec 00:03:32.594 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-macsec/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-macsec 00:03:32.594 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd 00:03:32.594 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd 00:03:32.594 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-graph/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-graph 00:03:32.594 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-graph/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-graph 00:03:32.594 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:32.594 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:32.594 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:32.594 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/perf_core.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:32.594 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/perf_core.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:32.594 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:32.594 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_default_v4.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:32.594 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_default_v6.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:32.594 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_route_parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:32.594 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:32.594 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:32.594 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:32.594 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl_scalar.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:32.594 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_altivec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:32.594 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:32.594 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:32.594 Installing 
/home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:32.594 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:32.594 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:32.594 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:32.594 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_sequential.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:32.594 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:32.594 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:32.594 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:32.594 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event_internal_port.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:32.594 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_fib.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:32.594 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:32.594 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:32.594 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_altivec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:32.594 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:32.594 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:32.594 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:32.594 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_route.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:32.594 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:32.594 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_default_v4.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:32.594 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_default_v6.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:32.594 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_route_parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:32.594 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:32.594 Installing /home/vagrant/spdk_repo/dpdk/examples/link_status_interrupt/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/link_status_interrupt 00:03:32.594 Installing /home/vagrant/spdk_repo/dpdk/examples/link_status_interrupt/main.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/link_status_interrupt 00:03:32.594 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process 00:03:32.594 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp 00:03:32.594 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_client/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:03:32.594 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_client/client.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:03:32.594 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:32.594 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:32.594 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/args.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:32.594 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:32.594 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/init.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:32.594 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:32.594 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/shared/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/shared 00:03:32.595 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:03:32.595 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:03:32.595 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:03:32.595 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:03:32.595 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:32.595 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:32.595 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/main.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:32.595 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/mp_commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:32.595 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/mp_commands.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:32.595 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/symmetric_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:03:32.595 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/symmetric_mp/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:03:32.595 Installing /home/vagrant/spdk_repo/dpdk/examples/ntb/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ntb 00:03:32.595 Installing /home/vagrant/spdk_repo/dpdk/examples/ntb/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ntb 00:03:32.595 Installing /home/vagrant/spdk_repo/dpdk/examples/ntb/ntb_fwd.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ntb 00:03:32.595 Installing /home/vagrant/spdk_repo/dpdk/examples/packet_ordering/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/packet_ordering 00:03:32.595 Installing /home/vagrant/spdk_repo/dpdk/examples/packet_ordering/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/packet_ordering 00:03:32.595 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:32.595 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:32.595 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/cli.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:32.595 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/conn.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:32.595 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/conn.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:32.595 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:32.595 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/obj.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:32.595 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/obj.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:32.595 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:32.595 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/thread.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:32.595 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ethdev.io to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:32.595 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:32.595 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:32.595 Installing 
/home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_nexthop_group_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:32.595 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_nexthop_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:32.595 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_routing_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:32.595 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/hash_func.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:32.595 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/hash_func.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:32.595 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ipsec.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:32.595 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ipsec.io to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:32.595 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ipsec.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:32.595 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ipsec_sa.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:32.595 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:32.595 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:32.595 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:32.595 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:32.595 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:32.595 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:32.595 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/learner.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:32.595 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/learner.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:32.595 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/meter.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:32.595 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/meter.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:32.595 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/mirroring.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:32.595 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/mirroring.spec to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:32.595 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/packet.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:32.595 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/pcap.io to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:32.595 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/recirculation.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:32.595 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/recirculation.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:32.595 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/registers.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:32.595 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/registers.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:32.595 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/rss.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:32.595 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/rss.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:32.595 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:32.595 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:32.595 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:32.595 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/varbit.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:32.595 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/varbit.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:32.595 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:32.595 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:32.595 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:32.595 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_table.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:32.595 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:32.595 Installing /home/vagrant/spdk_repo/dpdk/examples/ptpclient/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ptpclient 00:03:32.595 Installing /home/vagrant/spdk_repo/dpdk/examples/ptpclient/ptpclient.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ptpclient 00:03:32.595 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/Makefile to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:03:32.595 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:03:32.595 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:03:32.595 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/rte_policer.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:03:32.595 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/rte_policer.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:03:32.595 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:32.595 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/app_thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:32.595 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:32.595 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cfg_file.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:32.595 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cfg_file.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:32.596 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cmdline.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:32.596 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:32.596 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:32.596 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:32.596 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:32.596 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_ov.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:32.596 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_pie.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:32.596 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_red.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:32.596 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/stats.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:32.596 Installing /home/vagrant/spdk_repo/dpdk/examples/rxtx_callbacks/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:03:32.596 Installing /home/vagrant/spdk_repo/dpdk/examples/rxtx_callbacks/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:03:32.596 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd 00:03:32.596 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_node/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_node 00:03:32.596 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_node/node.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_node 00:03:32.596 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:03:32.596 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:03:32.596 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/args.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:03:32.596 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:03:32.596 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/init.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:03:32.596 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:03:32.596 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/shared/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/shared 00:03:32.596 Installing /home/vagrant/spdk_repo/dpdk/examples/service_cores/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/service_cores 00:03:32.596 Installing /home/vagrant/spdk_repo/dpdk/examples/service_cores/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/service_cores 00:03:32.596 Installing /home/vagrant/spdk_repo/dpdk/examples/skeleton/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/skeleton 00:03:32.596 Installing /home/vagrant/spdk_repo/dpdk/examples/skeleton/basicfwd.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/skeleton 00:03:32.596 Installing /home/vagrant/spdk_repo/dpdk/examples/timer/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/timer 00:03:32.596 Installing /home/vagrant/spdk_repo/dpdk/examples/timer/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/timer 00:03:32.596 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:03:32.596 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:03:32.596 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:03:32.596 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/vdpa_blk_compact.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:03:32.596 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:03:32.596 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:03:32.596 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:03:32.596 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/virtio_net.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:03:32.596 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:32.596 Installing 
/home/vagrant/spdk_repo/dpdk/examples/vhost_blk/blk.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:32.596 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/blk_spec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:32.596 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:32.596 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:32.596 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk_compat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:32.596 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_crypto/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_crypto 00:03:32.596 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_crypto/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_crypto 00:03:32.596 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:32.596 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_manager.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:32.596 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_manager.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:32.596 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_monitor.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:32.596 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_monitor.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:32.596 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:32.596 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:32.596 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor_nop.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:32.596 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor_x86.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:32.596 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:32.596 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/parse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:32.596 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/power_manager.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:32.596 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/power_manager.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:32.596 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/vm_power_cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:32.596 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/vm_power_cli.h to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:32.596 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:32.596 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:32.596 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:32.596 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/parse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:32.596 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:32.596 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:32.597 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq 00:03:32.597 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq 00:03:32.597 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq_dcb/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq_dcb 00:03:32.597 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq_dcb/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq_dcb 00:03:32.597 Installing lib/librte_log.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:32.597 Installing lib/librte_log.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:32.597 Installing lib/librte_kvargs.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:32.597 Installing lib/librte_kvargs.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:32.597 Installing lib/librte_telemetry.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:32.597 Installing lib/librte_telemetry.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:32.597 Installing lib/librte_eal.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:32.597 Installing lib/librte_eal.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:32.597 Installing lib/librte_ring.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:32.597 Installing lib/librte_ring.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:32.597 Installing lib/librte_rcu.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:32.597 Installing lib/librte_rcu.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:32.597 Installing lib/librte_mempool.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:32.597 Installing lib/librte_mempool.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:32.597 Installing lib/librte_mbuf.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:32.597 Installing lib/librte_mbuf.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:32.597 Installing lib/librte_net.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:32.597 Installing lib/librte_net.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:32.597 Installing lib/librte_meter.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:32.597 Installing lib/librte_meter.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:32.597 Installing 
lib/librte_ethdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:32.597 Installing lib/librte_ethdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:32.597 Installing lib/librte_pci.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:32.597 Installing lib/librte_pci.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:32.597 Installing lib/librte_cmdline.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:32.597 Installing lib/librte_cmdline.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:32.597 Installing lib/librte_metrics.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:32.597 Installing lib/librte_metrics.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:32.597 Installing lib/librte_hash.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:32.597 Installing lib/librte_hash.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:32.597 Installing lib/librte_timer.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:32.597 Installing lib/librte_timer.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:32.597 Installing lib/librte_acl.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:32.597 Installing lib/librte_acl.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:32.597 Installing lib/librte_bbdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:32.597 Installing lib/librte_bbdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:32.597 Installing lib/librte_bitratestats.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:32.597 Installing lib/librte_bitratestats.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:32.597 Installing lib/librte_bpf.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:32.597 Installing lib/librte_bpf.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:32.597 Installing lib/librte_cfgfile.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:32.597 Installing lib/librte_cfgfile.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:32.597 Installing lib/librte_compressdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:32.597 Installing lib/librte_compressdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:32.597 Installing lib/librte_cryptodev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:32.597 Installing lib/librte_cryptodev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:32.597 Installing lib/librte_distributor.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:32.597 Installing lib/librte_distributor.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:32.597 Installing lib/librte_dmadev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:32.597 Installing lib/librte_dmadev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:32.597 Installing lib/librte_efd.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:32.597 Installing lib/librte_efd.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:32.597 Installing lib/librte_eventdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:32.597 Installing lib/librte_eventdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:32.597 Installing lib/librte_dispatcher.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:32.597 Installing lib/librte_dispatcher.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:32.597 Installing lib/librte_gpudev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:32.597 Installing lib/librte_gpudev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:32.597 Installing lib/librte_gro.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:32.597 Installing lib/librte_gro.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 
00:03:32.597 Installing lib/librte_gso.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:32.597 Installing lib/librte_gso.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:32.597 Installing lib/librte_ip_frag.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:32.597 Installing lib/librte_ip_frag.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:32.597 Installing lib/librte_jobstats.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:32.597 Installing lib/librte_jobstats.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:32.597 Installing lib/librte_latencystats.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:32.597 Installing lib/librte_latencystats.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:32.597 Installing lib/librte_lpm.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:32.597 Installing lib/librte_lpm.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:32.597 Installing lib/librte_member.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:32.597 Installing lib/librte_member.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:32.597 Installing lib/librte_pcapng.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:32.597 Installing lib/librte_pcapng.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:32.597 Installing lib/librte_power.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:32.597 Installing lib/librte_power.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:32.597 Installing lib/librte_rawdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:32.597 Installing lib/librte_rawdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:32.597 Installing lib/librte_regexdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:32.597 Installing lib/librte_regexdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:32.597 Installing lib/librte_mldev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:32.597 Installing lib/librte_mldev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:32.597 Installing lib/librte_rib.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:32.597 Installing lib/librte_rib.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:32.597 Installing lib/librte_reorder.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:32.597 Installing lib/librte_reorder.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:32.597 Installing lib/librte_sched.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:32.597 Installing lib/librte_sched.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:32.597 Installing lib/librte_security.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:32.597 Installing lib/librte_security.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:32.597 Installing lib/librte_stack.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:32.597 Installing lib/librte_stack.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:32.597 Installing lib/librte_vhost.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:32.597 Installing lib/librte_vhost.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:32.597 Installing lib/librte_ipsec.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:32.597 Installing lib/librte_ipsec.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:32.597 Installing lib/librte_pdcp.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:32.597 Installing lib/librte_pdcp.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:32.597 Installing lib/librte_fib.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:32.597 Installing lib/librte_fib.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 
00:03:32.597 Installing lib/librte_port.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:32.597 Installing lib/librte_port.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:32.597 Installing lib/librte_pdump.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:32.597 Installing lib/librte_pdump.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:32.597 Installing lib/librte_table.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:32.598 Installing lib/librte_table.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:32.598 Installing lib/librte_pipeline.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:32.598 Installing lib/librte_pipeline.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:32.598 Installing lib/librte_graph.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:32.598 Installing lib/librte_graph.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:32.861 Installing lib/librte_node.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:32.861 Installing lib/librte_node.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:32.861 Installing drivers/librte_bus_pci.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:32.861 Installing drivers/librte_bus_pci.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0 00:03:32.861 Installing drivers/librte_bus_vdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:32.861 Installing drivers/librte_bus_vdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0 00:03:32.861 Installing drivers/librte_mempool_ring.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:32.861 Installing drivers/librte_mempool_ring.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0 00:03:32.861 Installing drivers/librte_net_i40e.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:32.861 Installing drivers/librte_net_i40e.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0 00:03:32.861 Installing app/dpdk-dumpcap to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:32.861 Installing app/dpdk-graph to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:32.861 Installing app/dpdk-pdump to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:32.861 Installing app/dpdk-proc-info to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:32.861 Installing app/dpdk-test-acl to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:32.861 Installing app/dpdk-test-bbdev to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:32.861 Installing app/dpdk-test-cmdline to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:32.861 Installing app/dpdk-test-compress-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:32.861 Installing app/dpdk-test-crypto-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:32.861 Installing app/dpdk-test-dma-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:32.861 Installing app/dpdk-test-eventdev to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:32.861 Installing app/dpdk-test-fib to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:32.861 Installing app/dpdk-test-flow-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:32.861 Installing app/dpdk-test-gpudev to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:32.861 Installing app/dpdk-test-mldev to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:32.861 Installing app/dpdk-test-pipeline to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:32.861 Installing app/dpdk-testpmd to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:32.861 Installing app/dpdk-test-regex to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:32.861 Installing app/dpdk-test-sad to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:32.861 Installing 
app/dpdk-test-security-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:32.861 Installing /home/vagrant/spdk_repo/dpdk/config/rte_config.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.861 Installing /home/vagrant/spdk_repo/dpdk/lib/log/rte_log.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.861 Installing /home/vagrant/spdk_repo/dpdk/lib/kvargs/rte_kvargs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.861 Installing /home/vagrant/spdk_repo/dpdk/lib/telemetry/rte_telemetry.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.861 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_atomic.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:32.862 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_byteorder.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:32.862 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_cpuflags.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:32.862 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_cycles.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:32.862 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_io.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:32.862 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_memcpy.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:32.862 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_pause.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:32.862 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_power_intrinsics.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:32.862 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_prefetch.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:32.862 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_rwlock.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:32.862 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_spinlock.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:32.862 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_vect.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:32.862 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.862 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.862 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_cpuflags.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.862 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_cycles.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.862 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_io.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.862 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_memcpy.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.862 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_pause.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.862 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_power_intrinsics.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.862 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_prefetch.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.862 Installing 
/home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_rtm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.862 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_rwlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.862 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_spinlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.862 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_vect.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.862 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic_32.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.862 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic_64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.862 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder_32.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.862 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder_64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.862 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_alarm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.862 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bitmap.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.862 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bitops.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.862 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_branch_prediction.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.862 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bus.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.862 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_class.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.862 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_common.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.862 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_compat.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.862 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_debug.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.862 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_dev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.862 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_devargs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.862 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.862 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal_memconfig.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.862 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal_trace.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.862 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_errno.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.862 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_epoll.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.862 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_fbarray.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.862 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_hexdump.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.862 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_hypervisor.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.862 Installing 
/home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_interrupts.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.862 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_keepalive.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.862 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_launch.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.862 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_lcore.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.862 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_lock_annotations.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.862 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_malloc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.862 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_mcslock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.862 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_memory.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.862 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_memzone.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.862 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pci_dev_feature_defs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.862 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pci_dev_features.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.862 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_per_lcore.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.862 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pflock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.862 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_random.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.862 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_reciprocal.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.862 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_seqcount.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.862 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_seqlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.862 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_service.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.862 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_service_component.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.862 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_stdatomic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.862 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_string_fns.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.862 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_tailq.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.862 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_thread.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.862 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_ticketlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.862 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_time.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.862 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.862 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace_point.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.862 Installing 
/home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace_point_register.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.862 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_uuid.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.862 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_version.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.862 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_vfio.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.862 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/linux/include/rte_os.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.862 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.862 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.862 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_elem.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.862 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.862 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_c11_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.862 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_generic_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.862 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_hts.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.862 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_hts_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.862 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.862 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.862 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek_zc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.862 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_rts.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.862 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_rts_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.862 Installing /home/vagrant/spdk_repo/dpdk/lib/rcu/rte_rcu_qsbr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.862 Installing /home/vagrant/spdk_repo/dpdk/lib/mempool/rte_mempool.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.862 Installing /home/vagrant/spdk_repo/dpdk/lib/mempool/rte_mempool_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.862 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.862 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.862 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_ptype.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.862 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_pool_ops.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.862 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_dyn.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.863 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ip.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.863 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_tcp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.863 Installing 
/home/vagrant/spdk_repo/dpdk/lib/net/rte_udp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.863 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_tls.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.863 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_dtls.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.863 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_esp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.863 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_sctp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.863 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_icmp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.863 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_arp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.863 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ether.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.863 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_macsec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.863 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_vxlan.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.863 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_gre.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.863 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_gtp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.863 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_net.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.863 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_net_crc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.863 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_mpls.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.863 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_higig.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.863 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ecpri.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.863 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_pdcp_hdr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.863 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_geneve.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.863 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_l2tpv2.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.863 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ppp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.863 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ib.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.863 Installing /home/vagrant/spdk_repo/dpdk/lib/meter/rte_meter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.863 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_cman.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.863 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.863 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.863 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_dev_info.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.863 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_flow.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.863 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_flow_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.863 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_mtr.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:03:32.863 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_mtr_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.863 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_tm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.863 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_tm_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.863 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.863 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_eth_ctrl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.863 Installing /home/vagrant/spdk_repo/dpdk/lib/pci/rte_pci.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.863 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.863 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.863 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_num.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.863 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_ipaddr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.863 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_etheraddr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.863 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_string.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.863 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_rdline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.863 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_vt100.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.863 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_socket.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.863 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_cirbuf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.863 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_portlist.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.863 Installing /home/vagrant/spdk_repo/dpdk/lib/metrics/rte_metrics.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.863 Installing /home/vagrant/spdk_repo/dpdk/lib/metrics/rte_metrics_telemetry.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.863 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_fbk_hash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.863 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_hash_crc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.863 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_hash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.863 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_jhash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.863 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.863 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash_gfni.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.863 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.863 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_generic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.863 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_sw.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:03:32.863 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_x86.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.863 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash_x86_gfni.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.863 Installing /home/vagrant/spdk_repo/dpdk/lib/timer/rte_timer.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.863 Installing /home/vagrant/spdk_repo/dpdk/lib/acl/rte_acl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.863 Installing /home/vagrant/spdk_repo/dpdk/lib/acl/rte_acl_osdep.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.863 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.863 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev_pmd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.863 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev_op.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.863 Installing /home/vagrant/spdk_repo/dpdk/lib/bitratestats/rte_bitrate.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.863 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/bpf_def.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.863 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/rte_bpf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.863 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/rte_bpf_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.863 Installing /home/vagrant/spdk_repo/dpdk/lib/cfgfile/rte_cfgfile.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.863 Installing /home/vagrant/spdk_repo/dpdk/lib/compressdev/rte_compressdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.863 Installing /home/vagrant/spdk_repo/dpdk/lib/compressdev/rte_comp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.863 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.863 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.863 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.863 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto_sym.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.863 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto_asym.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.863 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.863 Installing /home/vagrant/spdk_repo/dpdk/lib/distributor/rte_distributor.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.863 Installing /home/vagrant/spdk_repo/dpdk/lib/dmadev/rte_dmadev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.863 Installing /home/vagrant/spdk_repo/dpdk/lib/dmadev/rte_dmadev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.863 Installing /home/vagrant/spdk_repo/dpdk/lib/efd/rte_efd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.863 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_crypto_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.863 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_dma_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.863 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_eth_rx_adapter.h 
to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.863 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_eth_tx_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.863 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.863 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_timer_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.863 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.863 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.863 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.863 Installing /home/vagrant/spdk_repo/dpdk/lib/dispatcher/rte_dispatcher.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.863 Installing /home/vagrant/spdk_repo/dpdk/lib/gpudev/rte_gpudev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.863 Installing /home/vagrant/spdk_repo/dpdk/lib/gro/rte_gro.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.863 Installing /home/vagrant/spdk_repo/dpdk/lib/gso/rte_gso.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.863 Installing /home/vagrant/spdk_repo/dpdk/lib/ip_frag/rte_ip_frag.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.863 Installing /home/vagrant/spdk_repo/dpdk/lib/jobstats/rte_jobstats.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.863 Installing /home/vagrant/spdk_repo/dpdk/lib/latencystats/rte_latencystats.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.863 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.863 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.863 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_altivec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.863 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.863 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_scalar.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.864 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_sse.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.864 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_sve.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.864 Installing /home/vagrant/spdk_repo/dpdk/lib/member/rte_member.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.864 Installing /home/vagrant/spdk_repo/dpdk/lib/pcapng/rte_pcapng.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.864 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.864 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_guest_channel.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.864 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_pmd_mgmt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.864 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_uncore.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.864 Installing /home/vagrant/spdk_repo/dpdk/lib/rawdev/rte_rawdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.864 Installing /home/vagrant/spdk_repo/dpdk/lib/rawdev/rte_rawdev_pmd.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:03:32.864 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.864 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.864 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.864 Installing /home/vagrant/spdk_repo/dpdk/lib/mldev/rte_mldev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.864 Installing /home/vagrant/spdk_repo/dpdk/lib/mldev/rte_mldev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.864 Installing /home/vagrant/spdk_repo/dpdk/lib/rib/rte_rib.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.864 Installing /home/vagrant/spdk_repo/dpdk/lib/rib/rte_rib6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.864 Installing /home/vagrant/spdk_repo/dpdk/lib/reorder/rte_reorder.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.864 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_approx.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.864 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_red.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.864 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_sched.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.864 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_sched_common.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.864 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_pie.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.864 Installing /home/vagrant/spdk_repo/dpdk/lib/security/rte_security.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.864 Installing /home/vagrant/spdk_repo/dpdk/lib/security/rte_security_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.864 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.864 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_std.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.864 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.864 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_generic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.864 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_c11.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.864 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_stubs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.864 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vdpa.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.864 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.864 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost_async.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.864 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.864 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.864 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_sa.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.864 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_sad.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.864 
Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_group.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.864 Installing /home/vagrant/spdk_repo/dpdk/lib/pdcp/rte_pdcp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.864 Installing /home/vagrant/spdk_repo/dpdk/lib/pdcp/rte_pdcp_group.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.864 Installing /home/vagrant/spdk_repo/dpdk/lib/fib/rte_fib.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.864 Installing /home/vagrant/spdk_repo/dpdk/lib/fib/rte_fib6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.864 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.864 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_fd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.864 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_frag.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.864 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ras.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.864 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.864 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.864 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_sched.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.864 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_source_sink.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.864 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_sym_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.864 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_eventdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.864 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.864 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.864 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_fd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.864 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.864 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_source_sink.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.864 Installing /home/vagrant/spdk_repo/dpdk/lib/pdump/rte_pdump.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.864 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.864 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_hash_func.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.864 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.864 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_em.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.864 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_learner.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.864 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_selector.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.864 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_wm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.864 Installing 
/home/vagrant/spdk_repo/dpdk/lib/table/rte_table.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.864 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_acl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.864 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_array.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.864 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.864 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_cuckoo.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.864 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_func.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.864 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_lpm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.864 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_lpm_ipv6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.864 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_stub.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.864 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.864 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru_x86.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.864 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_func_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.864 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_pipeline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.864 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_port_in_action.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.864 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_table_action.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.864 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_ipsec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.864 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_pipeline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.864 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_extern.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.864 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_ctl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.864 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.864 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph_worker.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.864 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph_model_mcore_dispatch.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.864 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph_model_rtc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.864 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph_worker_common.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.864 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_eth_api.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.864 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_ip4_api.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.864 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_ip6_api.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.864 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_udp4_input_api.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:03:32.864 Installing /home/vagrant/spdk_repo/dpdk/drivers/bus/pci/rte_bus_pci.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.864 Installing /home/vagrant/spdk_repo/dpdk/drivers/bus/vdev/rte_bus_vdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.864 Installing /home/vagrant/spdk_repo/dpdk/drivers/net/i40e/rte_pmd_i40e.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.864 Installing /home/vagrant/spdk_repo/dpdk/buildtools/dpdk-cmdline-gen.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:32.864 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-devbind.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:32.864 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-pmdinfo.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:32.864 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-telemetry.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:32.864 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-hugepages.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:32.864 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-rss-flows.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:32.864 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/rte_build_config.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.865 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/meson-private/libdpdk-libs.pc to /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig 00:03:32.865 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/meson-private/libdpdk.pc to /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig 00:03:32.865 Installing symlink pointing to librte_log.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_log.so.24 00:03:32.865 Installing symlink pointing to librte_log.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_log.so 00:03:32.865 Installing symlink pointing to librte_kvargs.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_kvargs.so.24 00:03:32.865 Installing symlink pointing to librte_kvargs.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_kvargs.so 00:03:32.865 Installing symlink pointing to librte_telemetry.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_telemetry.so.24 00:03:32.865 Installing symlink pointing to librte_telemetry.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_telemetry.so 00:03:32.865 Installing symlink pointing to librte_eal.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eal.so.24 00:03:32.865 Installing symlink pointing to librte_eal.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eal.so 00:03:32.865 Installing symlink pointing to librte_ring.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ring.so.24 00:03:32.865 Installing symlink pointing to librte_ring.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ring.so 00:03:32.865 Installing symlink pointing to librte_rcu.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rcu.so.24 00:03:32.865 Installing symlink pointing to librte_rcu.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rcu.so 00:03:32.865 Installing symlink pointing to librte_mempool.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mempool.so.24 00:03:32.865 Installing symlink pointing to librte_mempool.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mempool.so 00:03:32.865 Installing symlink pointing to librte_mbuf.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mbuf.so.24 00:03:32.865 Installing symlink pointing to librte_mbuf.so.24 to 
/home/vagrant/spdk_repo/dpdk/build/lib/librte_mbuf.so 00:03:32.865 Installing symlink pointing to librte_net.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_net.so.24 00:03:32.865 Installing symlink pointing to librte_net.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_net.so 00:03:32.865 Installing symlink pointing to librte_meter.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_meter.so.24 00:03:32.865 Installing symlink pointing to librte_meter.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_meter.so 00:03:32.865 Installing symlink pointing to librte_ethdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ethdev.so.24 00:03:32.865 Installing symlink pointing to librte_ethdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ethdev.so 00:03:32.865 Installing symlink pointing to librte_pci.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pci.so.24 00:03:32.865 Installing symlink pointing to librte_pci.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pci.so 00:03:32.865 Installing symlink pointing to librte_cmdline.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cmdline.so.24 00:03:32.865 Installing symlink pointing to librte_cmdline.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cmdline.so 00:03:32.865 Installing symlink pointing to librte_metrics.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_metrics.so.24 00:03:32.865 Installing symlink pointing to librte_metrics.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_metrics.so 00:03:32.865 Installing symlink pointing to librte_hash.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_hash.so.24 00:03:32.865 Installing symlink pointing to librte_hash.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_hash.so 00:03:32.865 Installing symlink pointing to librte_timer.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_timer.so.24 00:03:32.865 Installing symlink pointing to librte_timer.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_timer.so 00:03:32.865 Installing symlink pointing to librte_acl.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_acl.so.24 00:03:32.865 Installing symlink pointing to librte_acl.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_acl.so 00:03:32.865 Installing symlink pointing to librte_bbdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bbdev.so.24 00:03:32.865 Installing symlink pointing to librte_bbdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bbdev.so 00:03:32.865 Installing symlink pointing to librte_bitratestats.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bitratestats.so.24 00:03:32.865 Installing symlink pointing to librte_bitratestats.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bitratestats.so 00:03:32.865 Installing symlink pointing to librte_bpf.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bpf.so.24 00:03:32.865 Installing symlink pointing to librte_bpf.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bpf.so 00:03:32.865 Installing symlink pointing to librte_cfgfile.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cfgfile.so.24 00:03:32.865 Installing symlink pointing to librte_cfgfile.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cfgfile.so 00:03:32.865 Installing symlink pointing to librte_compressdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_compressdev.so.24 00:03:32.865 Installing symlink pointing to librte_compressdev.so.24 to 
/home/vagrant/spdk_repo/dpdk/build/lib/librte_compressdev.so 00:03:32.865 Installing symlink pointing to librte_cryptodev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cryptodev.so.24 00:03:32.865 Installing symlink pointing to librte_cryptodev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cryptodev.so 00:03:32.865 Installing symlink pointing to librte_distributor.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_distributor.so.24 00:03:32.865 Installing symlink pointing to librte_distributor.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_distributor.so 00:03:32.865 Installing symlink pointing to librte_dmadev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dmadev.so.24 00:03:32.865 Installing symlink pointing to librte_dmadev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dmadev.so 00:03:32.865 Installing symlink pointing to librte_efd.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_efd.so.24 00:03:32.865 Installing symlink pointing to librte_efd.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_efd.so 00:03:32.865 Installing symlink pointing to librte_eventdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eventdev.so.24 00:03:32.865 Installing symlink pointing to librte_eventdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eventdev.so 00:03:32.865 Installing symlink pointing to librte_dispatcher.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dispatcher.so.24 00:03:32.865 Installing symlink pointing to librte_dispatcher.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dispatcher.so 00:03:32.865 Installing symlink pointing to librte_gpudev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gpudev.so.24 00:03:32.865 Installing symlink pointing to librte_gpudev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gpudev.so 00:03:32.865 Installing symlink pointing to librte_gro.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gro.so.24 00:03:32.865 Installing symlink pointing to librte_gro.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gro.so 00:03:32.865 Installing symlink pointing to librte_gso.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gso.so.24 00:03:32.865 Installing symlink pointing to librte_gso.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gso.so 00:03:32.865 Installing symlink pointing to librte_ip_frag.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ip_frag.so.24 00:03:32.865 Installing symlink pointing to librte_ip_frag.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ip_frag.so 00:03:32.865 Installing symlink pointing to librte_jobstats.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_jobstats.so.24 00:03:32.865 Installing symlink pointing to librte_jobstats.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_jobstats.so 00:03:32.865 Installing symlink pointing to librte_latencystats.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_latencystats.so.24 00:03:32.865 Installing symlink pointing to librte_latencystats.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_latencystats.so 00:03:32.865 Installing symlink pointing to librte_lpm.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_lpm.so.24 00:03:32.865 Installing symlink pointing to librte_lpm.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_lpm.so 00:03:32.865 Installing symlink pointing to librte_member.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_member.so.24 00:03:32.865 Installing symlink pointing to 
librte_member.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_member.so 00:03:32.865 Installing symlink pointing to librte_pcapng.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pcapng.so.24 00:03:32.865 Installing symlink pointing to librte_pcapng.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pcapng.so 00:03:32.865 Installing symlink pointing to librte_power.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_power.so.24 00:03:32.865 Installing symlink pointing to librte_power.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_power.so 00:03:32.865 Installing symlink pointing to librte_rawdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rawdev.so.24 00:03:32.865 Installing symlink pointing to librte_rawdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rawdev.so 00:03:32.865 Installing symlink pointing to librte_regexdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_regexdev.so.24 00:03:32.865 Installing symlink pointing to librte_regexdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_regexdev.so 00:03:32.865 Installing symlink pointing to librte_mldev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mldev.so.24 00:03:32.865 Installing symlink pointing to librte_mldev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mldev.so 00:03:32.865 Installing symlink pointing to librte_rib.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rib.so.24 00:03:32.865 Installing symlink pointing to librte_rib.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rib.so 00:03:32.865 Installing symlink pointing to librte_reorder.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_reorder.so.24 00:03:32.865 Installing symlink pointing to librte_reorder.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_reorder.so 00:03:32.865 Installing symlink pointing to librte_sched.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_sched.so.24 00:03:32.865 Installing symlink pointing to librte_sched.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_sched.so 00:03:32.865 Installing symlink pointing to librte_security.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_security.so.24 00:03:32.865 Installing symlink pointing to librte_security.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_security.so 00:03:32.865 './librte_bus_pci.so' -> 'dpdk/pmds-24.0/librte_bus_pci.so' 00:03:32.865 './librte_bus_pci.so.24' -> 'dpdk/pmds-24.0/librte_bus_pci.so.24' 00:03:32.865 './librte_bus_pci.so.24.0' -> 'dpdk/pmds-24.0/librte_bus_pci.so.24.0' 00:03:32.865 './librte_bus_vdev.so' -> 'dpdk/pmds-24.0/librte_bus_vdev.so' 00:03:32.865 './librte_bus_vdev.so.24' -> 'dpdk/pmds-24.0/librte_bus_vdev.so.24' 00:03:32.865 './librte_bus_vdev.so.24.0' -> 'dpdk/pmds-24.0/librte_bus_vdev.so.24.0' 00:03:32.865 './librte_mempool_ring.so' -> 'dpdk/pmds-24.0/librte_mempool_ring.so' 00:03:32.865 './librte_mempool_ring.so.24' -> 'dpdk/pmds-24.0/librte_mempool_ring.so.24' 00:03:32.865 './librte_mempool_ring.so.24.0' -> 'dpdk/pmds-24.0/librte_mempool_ring.so.24.0' 00:03:32.865 './librte_net_i40e.so' -> 'dpdk/pmds-24.0/librte_net_i40e.so' 00:03:32.865 './librte_net_i40e.so.24' -> 'dpdk/pmds-24.0/librte_net_i40e.so.24' 00:03:32.865 './librte_net_i40e.so.24.0' -> 'dpdk/pmds-24.0/librte_net_i40e.so.24.0' 00:03:32.865 Installing symlink pointing to librte_stack.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_stack.so.24 00:03:32.865 Installing symlink pointing to librte_stack.so.24 to 
/home/vagrant/spdk_repo/dpdk/build/lib/librte_stack.so 00:03:32.865 Installing symlink pointing to librte_vhost.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_vhost.so.24 00:03:32.865 Installing symlink pointing to librte_vhost.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_vhost.so 00:03:32.866 Installing symlink pointing to librte_ipsec.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ipsec.so.24 00:03:32.866 Installing symlink pointing to librte_ipsec.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ipsec.so 00:03:32.866 Installing symlink pointing to librte_pdcp.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdcp.so.24 00:03:32.866 Installing symlink pointing to librte_pdcp.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdcp.so 00:03:32.866 Installing symlink pointing to librte_fib.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_fib.so.24 00:03:32.866 Installing symlink pointing to librte_fib.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_fib.so 00:03:32.866 Installing symlink pointing to librte_port.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_port.so.24 00:03:32.866 Installing symlink pointing to librte_port.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_port.so 00:03:32.866 Installing symlink pointing to librte_pdump.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdump.so.24 00:03:32.866 Installing symlink pointing to librte_pdump.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdump.so 00:03:32.866 Installing symlink pointing to librte_table.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_table.so.24 00:03:32.866 Installing symlink pointing to librte_table.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_table.so 00:03:32.866 Installing symlink pointing to librte_pipeline.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pipeline.so.24 00:03:32.866 Installing symlink pointing to librte_pipeline.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pipeline.so 00:03:32.866 Installing symlink pointing to librte_graph.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_graph.so.24 00:03:32.866 Installing symlink pointing to librte_graph.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_graph.so 00:03:32.866 Installing symlink pointing to librte_node.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_node.so.24 00:03:32.866 Installing symlink pointing to librte_node.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_node.so 00:03:32.866 Installing symlink pointing to librte_bus_pci.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so.24 00:03:32.866 Installing symlink pointing to librte_bus_pci.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so 00:03:32.866 Installing symlink pointing to librte_bus_vdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so.24 00:03:32.866 Installing symlink pointing to librte_bus_vdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so 00:03:32.866 Installing symlink pointing to librte_mempool_ring.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so.24 00:03:32.866 Installing symlink pointing to librte_mempool_ring.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so 00:03:32.866 Installing symlink pointing to librte_net_i40e.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so.24 
00:03:32.866 Installing symlink pointing to librte_net_i40e.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so
00:03:32.866 Running custom install script '/bin/sh /home/vagrant/spdk_repo/dpdk/config/../buildtools/symlink-drivers-solibs.sh lib dpdk/pmds-24.0'
00:03:32.866 ************************************
00:03:32.866 END TEST build_native_dpdk
00:03:32.866 ************************************
00:03:32.866 06:37:25 build_native_dpdk -- common/autobuild_common.sh@213 -- $ cat
00:03:32.866 06:37:25 build_native_dpdk -- common/autobuild_common.sh@218 -- $ cd /home/vagrant/spdk_repo/spdk
00:03:32.866 
00:03:32.866 real	0m35.591s
00:03:32.866 user	4m9.865s
00:03:32.866 sys	0m35.467s
00:03:32.866 06:37:25 build_native_dpdk -- common/autotest_common.sh@1130 -- $ xtrace_disable
00:03:32.866 06:37:25 build_native_dpdk -- common/autotest_common.sh@10 -- $ set +x
00:03:33.125 06:37:25 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in
00:03:33.125 06:37:25 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]]
00:03:33.125 06:37:25 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]]
00:03:33.125 06:37:25 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]]
00:03:33.125 06:37:25 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]]
00:03:33.125 06:37:25 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]]
00:03:33.125 06:37:25 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]]
00:03:33.125 06:37:25 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-dpdk=/home/vagrant/spdk_repo/dpdk/build --with-xnvme --with-shared
00:03:33.125 Using /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig for additional libs...
00:03:33.125 DPDK libraries: /home/vagrant/spdk_repo/dpdk/build/lib
00:03:33.125 DPDK includes: /home/vagrant/spdk_repo/dpdk/build/include
00:03:33.125 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk
00:03:33.384 Using 'verbs' RDMA provider
00:03:44.313 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done.
00:03:54.337 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done.
00:03:54.337 Creating mk/config.mk...done.
00:03:54.337 Creating mk/cc.flags.mk...done.
00:03:54.337 Type 'make' to build.
00:03:54.337 06:37:47 -- spdk/autobuild.sh@70 -- $ run_test make make -j10
00:03:54.337 06:37:47 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
00:03:54.337 06:37:47 -- common/autotest_common.sh@1111 -- $ xtrace_disable
00:03:54.337 06:37:47 -- common/autotest_common.sh@10 -- $ set +x
00:03:54.337 ************************************
00:03:54.337 START TEST make
00:03:54.337 ************************************
00:03:54.337 06:37:47 make -- common/autotest_common.sh@1129 -- $ make -j10
00:03:54.596 (cd /home/vagrant/spdk_repo/spdk/xnvme && \
00:03:54.596 export PKG_CONFIG_PATH=$PKG_CONFIG_PATH:/usr/lib/pkgconfig:/usr/lib64/pkgconfig && \
00:03:54.596 meson setup builddir \
00:03:54.596 -Dwith-libaio=enabled \
00:03:54.596 -Dwith-liburing=enabled \
00:03:54.596 -Dwith-libvfn=disabled \
00:03:54.596 -Dwith-spdk=disabled \
00:03:54.596 -Dexamples=false \
00:03:54.596 -Dtests=false \
00:03:54.596 -Dtools=false && \
00:03:54.596 meson compile -C builddir && \
00:03:54.596 cd -)
00:03:54.596 make[1]: Nothing to be done for 'all'.
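(Editorial sketch, not captured log output: the PKG_CONFIG_PATH export above is the same mechanism configure used to locate DPDK, via the libdpdk.pc installed earlier under /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig. Assuming that install tree, an out-of-tree consumer could be compiled against it like this; hello.c and the output name are hypothetical.)

  export PKG_CONFIG_PATH=/home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig:$PKG_CONFIG_PATH
  cc hello.c -o hello $(pkg-config --cflags --libs libdpdk)   # libdpdk.pc pulls in libdpdk-libs.pc from the same directory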
00:03:56.499 The Meson build system
00:03:56.499 Version: 1.5.0
00:03:56.499 Source dir: /home/vagrant/spdk_repo/spdk/xnvme
00:03:56.499 Build dir: /home/vagrant/spdk_repo/spdk/xnvme/builddir
00:03:56.499 Build type: native build
00:03:56.499 Project name: xnvme
00:03:56.499 Project version: 0.7.5
00:03:56.499 C compiler for the host machine: gcc (gcc 13.3.1 "gcc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)")
00:03:56.499 C linker for the host machine: gcc ld.bfd 2.40-14
00:03:56.499 Host machine cpu family: x86_64
00:03:56.499 Host machine cpu: x86_64
00:03:56.499 Message: host_machine.system: linux
00:03:56.499 Compiler for C supports arguments -Wno-missing-braces: YES
00:03:56.499 Compiler for C supports arguments -Wno-cast-function-type: YES
00:03:56.499 Compiler for C supports arguments -Wno-strict-aliasing: YES
00:03:56.499 Run-time dependency threads found: YES
00:03:56.499 Has header "setupapi.h" : NO
00:03:56.499 Has header "linux/blkzoned.h" : YES
00:03:56.499 Has header "linux/blkzoned.h" : YES (cached)
00:03:56.499 Has header "libaio.h" : YES
00:03:56.499 Library aio found: YES
00:03:56.499 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5
00:03:56.499 Run-time dependency liburing found: YES 2.2
00:03:56.499 Dependency libvfn skipped: feature with-libvfn disabled
00:03:56.499 Found CMake: /usr/bin/cmake (3.27.7)
00:03:56.499 Run-time dependency libisal found: NO (tried pkgconfig and cmake)
00:03:56.499 Subproject spdk : skipped: feature with-spdk disabled
00:03:56.499 Run-time dependency appleframeworks found: NO (tried framework)
00:03:56.499 Run-time dependency appleframeworks found: NO (tried framework)
00:03:56.499 Library rt found: YES
00:03:56.499 Checking for function "clock_gettime" with dependency -lrt: YES
00:03:56.499 Configuring xnvme_config.h using configuration
00:03:56.499 Configuring xnvme.spec using configuration
00:03:56.499 Run-time dependency bash-completion found: YES 2.11
00:03:56.499 Message: Bash-completions: /usr/share/bash-completion/completions
00:03:56.499 Program cp found: YES (/usr/bin/cp)
00:03:56.499 Build targets in project: 3
00:03:56.499 
00:03:56.499 xnvme 0.7.5
00:03:56.499 
00:03:56.499 Subprojects
00:03:56.499 spdk : NO Feature 'with-spdk' disabled
00:03:56.499 
00:03:56.499 User defined options
00:03:56.499 examples : false
00:03:56.499 tests : false
00:03:56.499 tools : false
00:03:56.499 with-libaio : enabled
00:03:56.499 with-liburing: enabled
00:03:56.499 with-libvfn : disabled
00:03:56.499 with-spdk : disabled
00:03:56.499 
00:03:56.499 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:03:56.758 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/xnvme/builddir'
00:03:56.758 [1/76] Generating toolbox/xnvme-driver-script with a custom command
00:03:56.758 [2/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd.c.o
00:03:56.758 [3/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd_async.c.o
00:03:56.758 [4/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_admin_shim.c.o
00:03:56.758 [5/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_nil.c.o
00:03:56.758 [6/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_mem_posix.c.o
00:03:56.758 [7/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd_dev.c.o
00:03:57.017 [8/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_adm.c.o
00:03:57.017 [9/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd_nvme.c.o
00:03:57.017 [10/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_sync_psync.c.o 00:03:57.017
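(Editorial sketch, not captured log output: this progress stream comes from meson's ninja backend, which the log itself confirms further down with "autodetecting backend as ninja". Assuming meson and ninja on PATH, the same build directory can be driven or queried directly; paths are the ones shown in the summary above.)

  ninja -C /home/vagrant/spdk_repo/spdk/xnvme/builddir          # what 'meson compile -C builddir' wraps here
  meson configure /home/vagrant/spdk_repo/spdk/xnvme/builddir   # list the option values from the 'User defined options' summary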
[11/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_posix.c.o 00:03:57.017 [12/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux.c.o 00:03:57.017 [13/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_emu.c.o 00:03:57.017 [14/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_thrpool.c.o 00:03:57.017 [15/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos.c.o 00:03:57.017 [16/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos_admin.c.o 00:03:57.017 [17/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_hugepage.c.o 00:03:57.017 [18/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos_sync.c.o 00:03:57.017 [19/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be.c.o 00:03:57.017 [20/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_async_libaio.c.o 00:03:57.017 [21/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos_dev.c.o 00:03:57.017 [22/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_block.c.o 00:03:57.017 [23/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk.c.o 00:03:57.017 [24/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_dev.c.o 00:03:57.017 [25/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_async_ucmd.c.o 00:03:57.017 [26/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_nvme.c.o 00:03:57.017 [27/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_admin.c.o 00:03:57.017 [28/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_nosys.c.o 00:03:57.017 [29/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk.c.o 00:03:57.017 [30/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk_admin.c.o 00:03:57.017 [31/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_dev.c.o 00:03:57.017 [32/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_async_liburing.c.o 00:03:57.017 [33/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_async.c.o 00:03:57.017 [34/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk_dev.c.o 00:03:57.017 [35/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_mem.c.o 00:03:57.275 [36/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_sync.c.o 00:03:57.275 [37/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio.c.o 00:03:57.275 [38/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_dev.c.o 00:03:57.275 [39/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_async.c.o 00:03:57.275 [40/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_async_iocp.c.o 00:03:57.275 [41/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_admin.c.o 00:03:57.275 [42/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_sync.c.o 00:03:57.275 [43/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk_sync.c.o 00:03:57.275 [44/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_async_iocp_th.c.o 00:03:57.275 [45/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_mem.c.o 00:03:57.275 [46/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows.c.o 00:03:57.275 [47/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_async_ioring.c.o 00:03:57.275 [48/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_fs.c.o 00:03:57.275 [49/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_block.c.o 00:03:57.275 [50/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_dev.c.o 00:03:57.275 
[51/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_mem.c.o 00:03:57.275 [52/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_nvme.c.o 00:03:57.275 [53/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_libconf_entries.c.o 00:03:57.275 [54/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_ident.c.o 00:03:57.275 [55/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_cmd.c.o 00:03:57.275 [56/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_geo.c.o 00:03:57.275 [57/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_file.c.o 00:03:57.275 [58/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_libconf.c.o 00:03:57.275 [59/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_req.c.o 00:03:57.275 [60/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_lba.c.o 00:03:57.275 [61/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_nvm.c.o 00:03:57.275 [62/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_opts.c.o 00:03:57.275 [63/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_kvs.c.o 00:03:57.534 [64/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_buf.c.o 00:03:57.534 [65/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_ver.c.o 00:03:57.534 [66/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_queue.c.o 00:03:57.534 [67/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_topology.c.o 00:03:57.534 [68/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_dev.c.o 00:03:57.534 [69/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_crc.c.o 00:03:57.534 [70/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_spec_pp.c.o 00:03:57.534 [71/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_znd.c.o 00:03:57.534 [72/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_pi.c.o 00:03:57.534 [73/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_cli.c.o 00:03:57.793 [74/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_spec.c.o 00:03:57.793 [75/76] Linking static target lib/libxnvme.a 00:03:58.051 [76/76] Linking target lib/libxnvme.so.0.7.5 00:03:58.051 INFO: autodetecting backend as ninja 00:03:58.051 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/xnvme/builddir 00:03:58.051 /home/vagrant/spdk_repo/spdk/xnvmebuild 00:04:30.134 CC lib/ut/ut.o 00:04:30.134 CC lib/ut_mock/mock.o 00:04:30.134 CC lib/log/log_flags.o 00:04:30.134 CC lib/log/log_deprecated.o 00:04:30.134 CC lib/log/log.o 00:04:30.134 LIB libspdk_ut.a 00:04:30.134 LIB libspdk_ut_mock.a 00:04:30.134 SO libspdk_ut.so.2.0 00:04:30.134 SO libspdk_ut_mock.so.6.0 00:04:30.134 LIB libspdk_log.a 00:04:30.134 SYMLINK libspdk_ut.so 00:04:30.134 SYMLINK libspdk_ut_mock.so 00:04:30.134 SO libspdk_log.so.7.1 00:04:30.134 SYMLINK libspdk_log.so 00:04:30.134 CC lib/dma/dma.o 00:04:30.134 CXX lib/trace_parser/trace.o 00:04:30.134 CC lib/ioat/ioat.o 00:04:30.134 CC lib/util/bit_array.o 00:04:30.134 CC lib/util/base64.o 00:04:30.134 CC lib/util/cpuset.o 00:04:30.134 CC lib/util/crc32c.o 00:04:30.134 CC lib/util/crc32.o 00:04:30.134 CC lib/util/crc16.o 00:04:30.134 CC lib/vfio_user/host/vfio_user_pci.o 00:04:30.134 CC lib/util/crc32_ieee.o 00:04:30.134 CC lib/vfio_user/host/vfio_user.o 00:04:30.134 CC lib/util/crc64.o 00:04:30.134 CC lib/util/dif.o 00:04:30.134 CC lib/util/fd.o 00:04:30.134 LIB libspdk_dma.a 00:04:30.134 SO libspdk_dma.so.5.0 00:04:30.134 CC lib/util/fd_group.o 00:04:30.134 LIB libspdk_ioat.a 00:04:30.134 CC lib/util/file.o 00:04:30.134 CC lib/util/hexlify.o 00:04:30.134 SO libspdk_ioat.so.7.0 00:04:30.134 SYMLINK libspdk_dma.so 
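(Editorial sketch, not captured log output: each LIB/SO/SYMLINK triple above is the shared-library flavour selected by the --with-shared configure flag earlier. Assuming SPDK's conventional build/lib output directory, a built library such as libspdk_log.so can be sanity-checked with standard binutils.)

  ldd /home/vagrant/spdk_repo/spdk/build/lib/libspdk_log.so                            # confirm all shared dependencies resolve
  nm -D --defined-only /home/vagrant/spdk_repo/spdk/build/lib/libspdk_log.so | head    # spot-check the exported symbols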
00:04:30.134 CC lib/util/iov.o 00:04:30.134 CC lib/util/math.o 00:04:30.134 SYMLINK libspdk_ioat.so 00:04:30.134 CC lib/util/net.o 00:04:30.134 CC lib/util/pipe.o 00:04:30.135 CC lib/util/strerror_tls.o 00:04:30.135 LIB libspdk_vfio_user.a 00:04:30.135 SO libspdk_vfio_user.so.5.0 00:04:30.135 CC lib/util/string.o 00:04:30.135 CC lib/util/uuid.o 00:04:30.135 SYMLINK libspdk_vfio_user.so 00:04:30.135 CC lib/util/xor.o 00:04:30.135 CC lib/util/zipf.o 00:04:30.135 CC lib/util/md5.o 00:04:30.135 LIB libspdk_util.a 00:04:30.135 SO libspdk_util.so.10.1 00:04:30.135 LIB libspdk_trace_parser.a 00:04:30.135 SYMLINK libspdk_util.so 00:04:30.135 SO libspdk_trace_parser.so.6.0 00:04:30.135 SYMLINK libspdk_trace_parser.so 00:04:30.135 CC lib/vmd/vmd.o 00:04:30.135 CC lib/vmd/led.o 00:04:30.135 CC lib/env_dpdk/memory.o 00:04:30.135 CC lib/env_dpdk/env.o 00:04:30.135 CC lib/env_dpdk/pci.o 00:04:30.135 CC lib/json/json_parse.o 00:04:30.135 CC lib/json/json_util.o 00:04:30.135 CC lib/conf/conf.o 00:04:30.135 CC lib/idxd/idxd.o 00:04:30.135 CC lib/rdma_utils/rdma_utils.o 00:04:30.135 CC lib/idxd/idxd_user.o 00:04:30.135 CC lib/json/json_write.o 00:04:30.135 LIB libspdk_conf.a 00:04:30.135 CC lib/env_dpdk/init.o 00:04:30.135 SO libspdk_conf.so.6.0 00:04:30.135 LIB libspdk_rdma_utils.a 00:04:30.135 SYMLINK libspdk_conf.so 00:04:30.135 SO libspdk_rdma_utils.so.1.0 00:04:30.135 CC lib/env_dpdk/threads.o 00:04:30.135 SYMLINK libspdk_rdma_utils.so 00:04:30.135 CC lib/env_dpdk/pci_ioat.o 00:04:30.135 CC lib/env_dpdk/pci_virtio.o 00:04:30.135 CC lib/idxd/idxd_kernel.o 00:04:30.135 LIB libspdk_json.a 00:04:30.135 SO libspdk_json.so.6.0 00:04:30.135 SYMLINK libspdk_json.so 00:04:30.135 CC lib/env_dpdk/pci_vmd.o 00:04:30.135 CC lib/env_dpdk/pci_idxd.o 00:04:30.135 CC lib/env_dpdk/pci_event.o 00:04:30.135 CC lib/env_dpdk/sigbus_handler.o 00:04:30.135 CC lib/env_dpdk/pci_dpdk.o 00:04:30.135 CC lib/env_dpdk/pci_dpdk_2207.o 00:04:30.135 CC lib/rdma_provider/common.o 00:04:30.135 CC lib/rdma_provider/rdma_provider_verbs.o 00:04:30.135 LIB libspdk_idxd.a 00:04:30.135 CC lib/jsonrpc/jsonrpc_server.o 00:04:30.135 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:04:30.135 SO libspdk_idxd.so.12.1 00:04:30.135 LIB libspdk_vmd.a 00:04:30.135 CC lib/env_dpdk/pci_dpdk_2211.o 00:04:30.135 SO libspdk_vmd.so.6.0 00:04:30.135 CC lib/jsonrpc/jsonrpc_client.o 00:04:30.135 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:04:30.135 SYMLINK libspdk_idxd.so 00:04:30.135 SYMLINK libspdk_vmd.so 00:04:30.135 LIB libspdk_rdma_provider.a 00:04:30.135 SO libspdk_rdma_provider.so.7.0 00:04:30.135 SYMLINK libspdk_rdma_provider.so 00:04:30.135 LIB libspdk_jsonrpc.a 00:04:30.135 SO libspdk_jsonrpc.so.6.0 00:04:30.135 SYMLINK libspdk_jsonrpc.so 00:04:30.135 CC lib/rpc/rpc.o 00:04:30.135 LIB libspdk_env_dpdk.a 00:04:30.135 LIB libspdk_rpc.a 00:04:30.135 SO libspdk_env_dpdk.so.15.1 00:04:30.135 SO libspdk_rpc.so.6.0 00:04:30.135 SYMLINK libspdk_rpc.so 00:04:30.135 SYMLINK libspdk_env_dpdk.so 00:04:30.135 CC lib/trace/trace_flags.o 00:04:30.135 CC lib/trace/trace.o 00:04:30.135 CC lib/trace/trace_rpc.o 00:04:30.135 CC lib/keyring/keyring.o 00:04:30.135 CC lib/notify/notify_rpc.o 00:04:30.135 CC lib/keyring/keyring_rpc.o 00:04:30.135 CC lib/notify/notify.o 00:04:30.135 LIB libspdk_notify.a 00:04:30.135 SO libspdk_notify.so.6.0 00:04:30.135 LIB libspdk_keyring.a 00:04:30.135 SYMLINK libspdk_notify.so 00:04:30.135 LIB libspdk_trace.a 00:04:30.135 SO libspdk_keyring.so.2.0 00:04:30.135 SO libspdk_trace.so.11.0 00:04:30.135 SYMLINK libspdk_keyring.so 00:04:30.135 SYMLINK 
libspdk_trace.so 00:04:30.135 CC lib/sock/sock.o 00:04:30.135 CC lib/sock/sock_rpc.o 00:04:30.135 CC lib/thread/thread.o 00:04:30.135 CC lib/thread/iobuf.o 00:04:30.394 LIB libspdk_sock.a 00:04:30.394 SO libspdk_sock.so.10.0 00:04:30.394 SYMLINK libspdk_sock.so 00:04:30.654 CC lib/nvme/nvme_ctrlr_cmd.o 00:04:30.654 CC lib/nvme/nvme_ctrlr.o 00:04:30.654 CC lib/nvme/nvme_ns_cmd.o 00:04:30.654 CC lib/nvme/nvme_ns.o 00:04:30.654 CC lib/nvme/nvme_pcie.o 00:04:30.654 CC lib/nvme/nvme.o 00:04:30.654 CC lib/nvme/nvme_pcie_common.o 00:04:30.654 CC lib/nvme/nvme_fabric.o 00:04:30.654 CC lib/nvme/nvme_qpair.o 00:04:31.226 CC lib/nvme/nvme_quirks.o 00:04:31.486 CC lib/nvme/nvme_transport.o 00:04:31.486 CC lib/nvme/nvme_discovery.o 00:04:31.486 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:04:31.486 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:04:31.486 CC lib/nvme/nvme_tcp.o 00:04:31.486 CC lib/nvme/nvme_opal.o 00:04:31.486 LIB libspdk_thread.a 00:04:31.486 CC lib/nvme/nvme_io_msg.o 00:04:31.486 SO libspdk_thread.so.11.0 00:04:31.747 SYMLINK libspdk_thread.so 00:04:31.747 CC lib/nvme/nvme_poll_group.o 00:04:31.747 CC lib/nvme/nvme_zns.o 00:04:32.008 CC lib/nvme/nvme_stubs.o 00:04:32.008 CC lib/nvme/nvme_auth.o 00:04:32.008 CC lib/nvme/nvme_cuse.o 00:04:32.008 CC lib/nvme/nvme_rdma.o 00:04:32.008 CC lib/accel/accel.o 00:04:32.008 CC lib/blob/blobstore.o 00:04:32.269 CC lib/init/json_config.o 00:04:32.269 CC lib/accel/accel_rpc.o 00:04:32.530 CC lib/accel/accel_sw.o 00:04:32.530 CC lib/init/subsystem.o 00:04:32.530 CC lib/virtio/virtio.o 00:04:32.530 CC lib/init/subsystem_rpc.o 00:04:32.791 CC lib/virtio/virtio_vhost_user.o 00:04:32.791 CC lib/init/rpc.o 00:04:32.791 CC lib/blob/request.o 00:04:32.791 CC lib/virtio/virtio_vfio_user.o 00:04:32.791 CC lib/blob/zeroes.o 00:04:32.791 CC lib/blob/blob_bs_dev.o 00:04:32.791 CC lib/fsdev/fsdev.o 00:04:32.791 LIB libspdk_init.a 00:04:33.053 SO libspdk_init.so.6.0 00:04:33.053 SYMLINK libspdk_init.so 00:04:33.053 CC lib/virtio/virtio_pci.o 00:04:33.053 CC lib/fsdev/fsdev_io.o 00:04:33.053 CC lib/fsdev/fsdev_rpc.o 00:04:33.053 CC lib/event/reactor.o 00:04:33.053 CC lib/event/app.o 00:04:33.053 CC lib/event/log_rpc.o 00:04:33.053 LIB libspdk_accel.a 00:04:33.053 CC lib/event/app_rpc.o 00:04:33.315 SO libspdk_accel.so.16.0 00:04:33.315 SYMLINK libspdk_accel.so 00:04:33.315 CC lib/event/scheduler_static.o 00:04:33.315 LIB libspdk_virtio.a 00:04:33.315 SO libspdk_virtio.so.7.0 00:04:33.315 SYMLINK libspdk_virtio.so 00:04:33.315 CC lib/bdev/bdev.o 00:04:33.315 CC lib/bdev/bdev_zone.o 00:04:33.315 CC lib/bdev/scsi_nvme.o 00:04:33.315 CC lib/bdev/part.o 00:04:33.315 CC lib/bdev/bdev_rpc.o 00:04:33.315 LIB libspdk_nvme.a 00:04:33.576 LIB libspdk_fsdev.a 00:04:33.576 SO libspdk_fsdev.so.2.0 00:04:33.576 SYMLINK libspdk_fsdev.so 00:04:33.576 SO libspdk_nvme.so.15.0 00:04:33.576 LIB libspdk_event.a 00:04:33.576 SO libspdk_event.so.14.0 00:04:33.837 SYMLINK libspdk_event.so 00:04:33.837 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:04:33.837 SYMLINK libspdk_nvme.so 00:04:34.410 LIB libspdk_fuse_dispatcher.a 00:04:34.410 SO libspdk_fuse_dispatcher.so.1.0 00:04:34.410 SYMLINK libspdk_fuse_dispatcher.so 00:04:34.983 LIB libspdk_blob.a 00:04:34.983 SO libspdk_blob.so.11.0 00:04:35.244 SYMLINK libspdk_blob.so 00:04:35.505 CC lib/blobfs/blobfs.o 00:04:35.505 CC lib/blobfs/tree.o 00:04:35.505 CC lib/lvol/lvol.o 00:04:35.505 LIB libspdk_bdev.a 00:04:35.505 SO libspdk_bdev.so.17.0 00:04:35.766 SYMLINK libspdk_bdev.so 00:04:35.766 CC lib/ftl/ftl_core.o 00:04:35.766 CC lib/ftl/ftl_init.o 00:04:35.766 CC 
lib/ftl/ftl_layout.o 00:04:35.766 CC lib/ftl/ftl_debug.o 00:04:35.766 CC lib/scsi/dev.o 00:04:35.766 CC lib/ublk/ublk.o 00:04:35.766 CC lib/nvmf/ctrlr.o 00:04:35.766 CC lib/nbd/nbd.o 00:04:36.027 CC lib/scsi/lun.o 00:04:36.027 CC lib/scsi/port.o 00:04:36.027 CC lib/scsi/scsi.o 00:04:36.027 LIB libspdk_lvol.a 00:04:36.027 CC lib/nvmf/ctrlr_discovery.o 00:04:36.027 CC lib/nvmf/ctrlr_bdev.o 00:04:36.027 SO libspdk_lvol.so.10.0 00:04:36.027 CC lib/nvmf/subsystem.o 00:04:36.027 CC lib/ftl/ftl_io.o 00:04:36.027 CC lib/scsi/scsi_bdev.o 00:04:36.300 SYMLINK libspdk_lvol.so 00:04:36.301 CC lib/nvmf/nvmf.o 00:04:36.301 CC lib/nbd/nbd_rpc.o 00:04:36.301 LIB libspdk_blobfs.a 00:04:36.301 SO libspdk_blobfs.so.10.0 00:04:36.301 SYMLINK libspdk_blobfs.so 00:04:36.301 CC lib/nvmf/nvmf_rpc.o 00:04:36.301 LIB libspdk_nbd.a 00:04:36.301 SO libspdk_nbd.so.7.0 00:04:36.301 CC lib/ftl/ftl_sb.o 00:04:36.301 CC lib/ublk/ublk_rpc.o 00:04:36.301 SYMLINK libspdk_nbd.so 00:04:36.301 CC lib/ftl/ftl_l2p.o 00:04:36.565 CC lib/scsi/scsi_pr.o 00:04:36.565 CC lib/scsi/scsi_rpc.o 00:04:36.565 CC lib/scsi/task.o 00:04:36.565 LIB libspdk_ublk.a 00:04:36.565 SO libspdk_ublk.so.3.0 00:04:36.565 CC lib/ftl/ftl_l2p_flat.o 00:04:36.565 SYMLINK libspdk_ublk.so 00:04:36.565 CC lib/ftl/ftl_nv_cache.o 00:04:36.565 CC lib/nvmf/transport.o 00:04:36.826 CC lib/nvmf/tcp.o 00:04:36.826 LIB libspdk_scsi.a 00:04:36.826 CC lib/nvmf/stubs.o 00:04:36.826 CC lib/ftl/ftl_band.o 00:04:36.826 SO libspdk_scsi.so.9.0 00:04:36.826 SYMLINK libspdk_scsi.so 00:04:36.826 CC lib/nvmf/mdns_server.o 00:04:37.086 CC lib/nvmf/rdma.o 00:04:37.086 CC lib/nvmf/auth.o 00:04:37.086 CC lib/ftl/ftl_band_ops.o 00:04:37.086 CC lib/ftl/ftl_writer.o 00:04:37.347 CC lib/ftl/ftl_rq.o 00:04:37.347 CC lib/ftl/ftl_reloc.o 00:04:37.347 CC lib/ftl/ftl_l2p_cache.o 00:04:37.347 CC lib/ftl/ftl_p2l.o 00:04:37.347 CC lib/ftl/ftl_p2l_log.o 00:04:37.347 CC lib/ftl/mngt/ftl_mngt.o 00:04:37.608 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:04:37.608 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:04:37.608 CC lib/ftl/mngt/ftl_mngt_startup.o 00:04:37.608 CC lib/ftl/mngt/ftl_mngt_md.o 00:04:37.608 CC lib/ftl/mngt/ftl_mngt_misc.o 00:04:37.608 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:04:37.608 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:04:37.869 CC lib/iscsi/conn.o 00:04:37.869 CC lib/iscsi/init_grp.o 00:04:37.869 CC lib/vhost/vhost.o 00:04:37.869 CC lib/iscsi/iscsi.o 00:04:37.869 CC lib/iscsi/param.o 00:04:37.869 CC lib/ftl/mngt/ftl_mngt_band.o 00:04:37.869 CC lib/iscsi/portal_grp.o 00:04:37.869 CC lib/iscsi/tgt_node.o 00:04:38.130 CC lib/iscsi/iscsi_subsystem.o 00:04:38.130 CC lib/iscsi/iscsi_rpc.o 00:04:38.130 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:04:38.130 CC lib/iscsi/task.o 00:04:38.130 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:04:38.392 CC lib/vhost/vhost_rpc.o 00:04:38.392 CC lib/vhost/vhost_scsi.o 00:04:38.392 CC lib/vhost/vhost_blk.o 00:04:38.392 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:04:38.392 CC lib/vhost/rte_vhost_user.o 00:04:38.392 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:04:38.392 CC lib/ftl/utils/ftl_conf.o 00:04:38.392 CC lib/ftl/utils/ftl_md.o 00:04:38.653 CC lib/ftl/utils/ftl_mempool.o 00:04:38.653 CC lib/ftl/utils/ftl_bitmap.o 00:04:38.653 CC lib/ftl/utils/ftl_property.o 00:04:38.653 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:04:38.653 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:04:38.915 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:04:38.915 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:04:38.915 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:04:38.915 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:04:38.915 CC 
lib/ftl/upgrade/ftl_trim_upgrade.o 00:04:38.915 CC lib/ftl/upgrade/ftl_sb_v3.o 00:04:38.915 CC lib/ftl/upgrade/ftl_sb_v5.o 00:04:38.915 CC lib/ftl/nvc/ftl_nvc_dev.o 00:04:38.915 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:04:39.176 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:04:39.176 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:04:39.176 CC lib/ftl/base/ftl_base_dev.o 00:04:39.176 CC lib/ftl/base/ftl_base_bdev.o 00:04:39.176 CC lib/ftl/ftl_trace.o 00:04:39.176 LIB libspdk_nvmf.a 00:04:39.176 LIB libspdk_iscsi.a 00:04:39.176 LIB libspdk_vhost.a 00:04:39.176 SO libspdk_nvmf.so.20.0 00:04:39.436 SO libspdk_vhost.so.8.0 00:04:39.436 SO libspdk_iscsi.so.8.0 00:04:39.436 LIB libspdk_ftl.a 00:04:39.436 SYMLINK libspdk_vhost.so 00:04:39.436 SYMLINK libspdk_iscsi.so 00:04:39.436 SYMLINK libspdk_nvmf.so 00:04:39.436 SO libspdk_ftl.so.9.0 00:04:39.695 SYMLINK libspdk_ftl.so 00:04:39.956 CC module/env_dpdk/env_dpdk_rpc.o 00:04:40.224 CC module/accel/iaa/accel_iaa.o 00:04:40.224 CC module/keyring/file/keyring.o 00:04:40.224 CC module/scheduler/dynamic/scheduler_dynamic.o 00:04:40.224 CC module/fsdev/aio/fsdev_aio.o 00:04:40.224 CC module/accel/ioat/accel_ioat.o 00:04:40.224 CC module/sock/posix/posix.o 00:04:40.224 CC module/accel/error/accel_error.o 00:04:40.224 CC module/blob/bdev/blob_bdev.o 00:04:40.224 CC module/accel/dsa/accel_dsa.o 00:04:40.224 LIB libspdk_env_dpdk_rpc.a 00:04:40.224 SO libspdk_env_dpdk_rpc.so.6.0 00:04:40.224 SYMLINK libspdk_env_dpdk_rpc.so 00:04:40.224 CC module/accel/iaa/accel_iaa_rpc.o 00:04:40.224 CC module/keyring/file/keyring_rpc.o 00:04:40.224 CC module/accel/ioat/accel_ioat_rpc.o 00:04:40.224 LIB libspdk_scheduler_dynamic.a 00:04:40.224 SO libspdk_scheduler_dynamic.so.4.0 00:04:40.224 CC module/accel/error/accel_error_rpc.o 00:04:40.224 LIB libspdk_accel_iaa.a 00:04:40.224 LIB libspdk_keyring_file.a 00:04:40.224 SYMLINK libspdk_scheduler_dynamic.so 00:04:40.224 CC module/accel/dsa/accel_dsa_rpc.o 00:04:40.224 LIB libspdk_blob_bdev.a 00:04:40.224 SO libspdk_keyring_file.so.2.0 00:04:40.224 SO libspdk_accel_iaa.so.3.0 00:04:40.224 LIB libspdk_accel_ioat.a 00:04:40.224 SO libspdk_blob_bdev.so.11.0 00:04:40.224 SO libspdk_accel_ioat.so.6.0 00:04:40.224 SYMLINK libspdk_keyring_file.so 00:04:40.224 CC module/fsdev/aio/fsdev_aio_rpc.o 00:04:40.224 SYMLINK libspdk_blob_bdev.so 00:04:40.224 LIB libspdk_accel_error.a 00:04:40.224 SYMLINK libspdk_accel_iaa.so 00:04:40.489 SO libspdk_accel_error.so.2.0 00:04:40.489 SYMLINK libspdk_accel_ioat.so 00:04:40.489 LIB libspdk_accel_dsa.a 00:04:40.489 CC module/fsdev/aio/linux_aio_mgr.o 00:04:40.489 SO libspdk_accel_dsa.so.5.0 00:04:40.489 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:04:40.489 SYMLINK libspdk_accel_error.so 00:04:40.489 SYMLINK libspdk_accel_dsa.so 00:04:40.489 CC module/keyring/linux/keyring.o 00:04:40.489 CC module/scheduler/gscheduler/gscheduler.o 00:04:40.489 LIB libspdk_scheduler_dpdk_governor.a 00:04:40.489 CC module/bdev/delay/vbdev_delay.o 00:04:40.489 SO libspdk_scheduler_dpdk_governor.so.4.0 00:04:40.489 CC module/bdev/error/vbdev_error.o 00:04:40.489 CC module/keyring/linux/keyring_rpc.o 00:04:40.489 SYMLINK libspdk_scheduler_dpdk_governor.so 00:04:40.489 CC module/bdev/gpt/gpt.o 00:04:40.750 LIB libspdk_scheduler_gscheduler.a 00:04:40.750 CC module/blobfs/bdev/blobfs_bdev.o 00:04:40.750 SO libspdk_scheduler_gscheduler.so.4.0 00:04:40.750 CC module/bdev/lvol/vbdev_lvol.o 00:04:40.750 LIB libspdk_sock_posix.a 00:04:40.750 SO libspdk_sock_posix.so.6.0 00:04:40.750 SYMLINK libspdk_scheduler_gscheduler.so 00:04:40.750 LIB 
libspdk_keyring_linux.a 00:04:40.750 CC module/bdev/error/vbdev_error_rpc.o 00:04:40.750 CC module/bdev/malloc/bdev_malloc.o 00:04:40.750 SO libspdk_keyring_linux.so.1.0 00:04:40.750 CC module/bdev/gpt/vbdev_gpt.o 00:04:40.750 SYMLINK libspdk_sock_posix.so 00:04:40.750 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:04:40.750 LIB libspdk_fsdev_aio.a 00:04:40.750 SYMLINK libspdk_keyring_linux.so 00:04:40.750 CC module/bdev/malloc/bdev_malloc_rpc.o 00:04:40.750 SO libspdk_fsdev_aio.so.1.0 00:04:40.750 CC module/bdev/delay/vbdev_delay_rpc.o 00:04:40.750 LIB libspdk_bdev_error.a 00:04:40.750 SO libspdk_bdev_error.so.6.0 00:04:40.750 SYMLINK libspdk_fsdev_aio.so 00:04:40.750 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:04:41.009 LIB libspdk_blobfs_bdev.a 00:04:41.009 CC module/bdev/null/bdev_null.o 00:04:41.009 SO libspdk_blobfs_bdev.so.6.0 00:04:41.009 SYMLINK libspdk_bdev_error.so 00:04:41.009 CC module/bdev/null/bdev_null_rpc.o 00:04:41.009 CC module/bdev/nvme/bdev_nvme.o 00:04:41.009 SYMLINK libspdk_blobfs_bdev.so 00:04:41.009 CC module/bdev/nvme/bdev_nvme_rpc.o 00:04:41.009 LIB libspdk_bdev_delay.a 00:04:41.009 LIB libspdk_bdev_gpt.a 00:04:41.009 SO libspdk_bdev_delay.so.6.0 00:04:41.009 CC module/bdev/passthru/vbdev_passthru.o 00:04:41.009 SO libspdk_bdev_gpt.so.6.0 00:04:41.009 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:04:41.009 CC module/bdev/nvme/nvme_rpc.o 00:04:41.009 SYMLINK libspdk_bdev_delay.so 00:04:41.009 CC module/bdev/nvme/bdev_mdns_client.o 00:04:41.009 SYMLINK libspdk_bdev_gpt.so 00:04:41.009 LIB libspdk_bdev_null.a 00:04:41.009 LIB libspdk_bdev_malloc.a 00:04:41.009 SO libspdk_bdev_null.so.6.0 00:04:41.009 SO libspdk_bdev_malloc.so.6.0 00:04:41.270 CC module/bdev/nvme/vbdev_opal.o 00:04:41.270 LIB libspdk_bdev_lvol.a 00:04:41.270 SYMLINK libspdk_bdev_null.so 00:04:41.270 CC module/bdev/nvme/vbdev_opal_rpc.o 00:04:41.270 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:04:41.270 SYMLINK libspdk_bdev_malloc.so 00:04:41.270 SO libspdk_bdev_lvol.so.6.0 00:04:41.270 LIB libspdk_bdev_passthru.a 00:04:41.270 CC module/bdev/raid/bdev_raid.o 00:04:41.270 SYMLINK libspdk_bdev_lvol.so 00:04:41.270 SO libspdk_bdev_passthru.so.6.0 00:04:41.270 SYMLINK libspdk_bdev_passthru.so 00:04:41.270 CC module/bdev/raid/bdev_raid_rpc.o 00:04:41.270 CC module/bdev/split/vbdev_split.o 00:04:41.270 CC module/bdev/zone_block/vbdev_zone_block.o 00:04:41.270 CC module/bdev/xnvme/bdev_xnvme.o 00:04:41.530 CC module/bdev/aio/bdev_aio.o 00:04:41.530 CC module/bdev/ftl/bdev_ftl.o 00:04:41.530 CC module/bdev/xnvme/bdev_xnvme_rpc.o 00:04:41.530 CC module/bdev/iscsi/bdev_iscsi.o 00:04:41.530 CC module/bdev/split/vbdev_split_rpc.o 00:04:41.530 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:04:41.530 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:04:41.530 CC module/bdev/ftl/bdev_ftl_rpc.o 00:04:41.530 LIB libspdk_bdev_split.a 00:04:41.530 CC module/bdev/raid/bdev_raid_sb.o 00:04:41.530 SO libspdk_bdev_split.so.6.0 00:04:41.530 LIB libspdk_bdev_xnvme.a 00:04:41.790 LIB libspdk_bdev_zone_block.a 00:04:41.791 SO libspdk_bdev_xnvme.so.3.0 00:04:41.791 SO libspdk_bdev_zone_block.so.6.0 00:04:41.791 CC module/bdev/aio/bdev_aio_rpc.o 00:04:41.791 CC module/bdev/raid/raid0.o 00:04:41.791 SYMLINK libspdk_bdev_split.so 00:04:41.791 CC module/bdev/raid/raid1.o 00:04:41.791 SYMLINK libspdk_bdev_xnvme.so 00:04:41.791 CC module/bdev/raid/concat.o 00:04:41.791 LIB libspdk_bdev_iscsi.a 00:04:41.791 SYMLINK libspdk_bdev_zone_block.so 00:04:41.791 SO libspdk_bdev_iscsi.so.6.0 00:04:41.791 LIB libspdk_bdev_ftl.a 00:04:41.791 SO 
libspdk_bdev_ftl.so.6.0 00:04:41.791 SYMLINK libspdk_bdev_iscsi.so 00:04:41.791 SYMLINK libspdk_bdev_ftl.so 00:04:41.791 LIB libspdk_bdev_aio.a 00:04:41.791 CC module/bdev/virtio/bdev_virtio_scsi.o 00:04:41.791 CC module/bdev/virtio/bdev_virtio_blk.o 00:04:41.791 CC module/bdev/virtio/bdev_virtio_rpc.o 00:04:41.791 SO libspdk_bdev_aio.so.6.0 00:04:41.791 SYMLINK libspdk_bdev_aio.so 00:04:42.051 LIB libspdk_bdev_raid.a 00:04:42.051 SO libspdk_bdev_raid.so.6.0 00:04:42.311 SYMLINK libspdk_bdev_raid.so 00:04:42.311 LIB libspdk_bdev_virtio.a 00:04:42.311 SO libspdk_bdev_virtio.so.6.0 00:04:42.311 SYMLINK libspdk_bdev_virtio.so 00:04:43.694 LIB libspdk_bdev_nvme.a 00:04:43.694 SO libspdk_bdev_nvme.so.7.1 00:04:43.694 SYMLINK libspdk_bdev_nvme.so 00:04:44.265 CC module/event/subsystems/fsdev/fsdev.o 00:04:44.265 CC module/event/subsystems/keyring/keyring.o 00:04:44.265 CC module/event/subsystems/vmd/vmd.o 00:04:44.265 CC module/event/subsystems/vmd/vmd_rpc.o 00:04:44.265 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:04:44.265 CC module/event/subsystems/iobuf/iobuf.o 00:04:44.265 CC module/event/subsystems/scheduler/scheduler.o 00:04:44.265 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:04:44.265 CC module/event/subsystems/sock/sock.o 00:04:44.265 LIB libspdk_event_keyring.a 00:04:44.265 LIB libspdk_event_fsdev.a 00:04:44.265 LIB libspdk_event_vhost_blk.a 00:04:44.265 LIB libspdk_event_scheduler.a 00:04:44.265 LIB libspdk_event_vmd.a 00:04:44.265 SO libspdk_event_keyring.so.1.0 00:04:44.265 LIB libspdk_event_sock.a 00:04:44.265 SO libspdk_event_fsdev.so.1.0 00:04:44.265 LIB libspdk_event_iobuf.a 00:04:44.265 SO libspdk_event_vhost_blk.so.3.0 00:04:44.265 SO libspdk_event_scheduler.so.4.0 00:04:44.265 SO libspdk_event_vmd.so.6.0 00:04:44.265 SO libspdk_event_sock.so.5.0 00:04:44.265 SO libspdk_event_iobuf.so.3.0 00:04:44.265 SYMLINK libspdk_event_keyring.so 00:04:44.265 SYMLINK libspdk_event_vhost_blk.so 00:04:44.265 SYMLINK libspdk_event_fsdev.so 00:04:44.265 SYMLINK libspdk_event_scheduler.so 00:04:44.265 SYMLINK libspdk_event_sock.so 00:04:44.265 SYMLINK libspdk_event_vmd.so 00:04:44.265 SYMLINK libspdk_event_iobuf.so 00:04:44.525 CC module/event/subsystems/accel/accel.o 00:04:44.786 LIB libspdk_event_accel.a 00:04:44.786 SO libspdk_event_accel.so.6.0 00:04:44.786 SYMLINK libspdk_event_accel.so 00:04:45.048 CC module/event/subsystems/bdev/bdev.o 00:04:45.309 LIB libspdk_event_bdev.a 00:04:45.309 SO libspdk_event_bdev.so.6.0 00:04:45.309 SYMLINK libspdk_event_bdev.so 00:04:45.572 CC module/event/subsystems/scsi/scsi.o 00:04:45.572 CC module/event/subsystems/nbd/nbd.o 00:04:45.572 CC module/event/subsystems/ublk/ublk.o 00:04:45.572 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:04:45.572 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:04:45.572 LIB libspdk_event_nbd.a 00:04:45.572 LIB libspdk_event_ublk.a 00:04:45.572 LIB libspdk_event_scsi.a 00:04:45.572 SO libspdk_event_nbd.so.6.0 00:04:45.572 SO libspdk_event_scsi.so.6.0 00:04:45.572 SO libspdk_event_ublk.so.3.0 00:04:45.572 SYMLINK libspdk_event_ublk.so 00:04:45.572 SYMLINK libspdk_event_scsi.so 00:04:45.572 SYMLINK libspdk_event_nbd.so 00:04:45.572 LIB libspdk_event_nvmf.a 00:04:45.834 SO libspdk_event_nvmf.so.6.0 00:04:45.834 SYMLINK libspdk_event_nvmf.so 00:04:45.834 CC module/event/subsystems/iscsi/iscsi.o 00:04:45.834 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:04:46.097 LIB libspdk_event_vhost_scsi.a 00:04:46.097 LIB libspdk_event_iscsi.a 00:04:46.097 SO libspdk_event_iscsi.so.6.0 00:04:46.097 SO 
libspdk_event_vhost_scsi.so.3.0 00:04:46.097 SYMLINK libspdk_event_vhost_scsi.so 00:04:46.097 SYMLINK libspdk_event_iscsi.so 00:04:46.097 SO libspdk.so.6.0 00:04:46.359 SYMLINK libspdk.so 00:04:46.359 CC test/rpc_client/rpc_client_test.o 00:04:46.359 CXX app/trace/trace.o 00:04:46.359 TEST_HEADER include/spdk/accel.h 00:04:46.359 TEST_HEADER include/spdk/accel_module.h 00:04:46.359 TEST_HEADER include/spdk/assert.h 00:04:46.359 TEST_HEADER include/spdk/barrier.h 00:04:46.359 TEST_HEADER include/spdk/base64.h 00:04:46.359 TEST_HEADER include/spdk/bdev.h 00:04:46.359 TEST_HEADER include/spdk/bdev_module.h 00:04:46.359 TEST_HEADER include/spdk/bdev_zone.h 00:04:46.359 TEST_HEADER include/spdk/bit_array.h 00:04:46.359 TEST_HEADER include/spdk/bit_pool.h 00:04:46.359 TEST_HEADER include/spdk/blob_bdev.h 00:04:46.359 TEST_HEADER include/spdk/blobfs_bdev.h 00:04:46.359 TEST_HEADER include/spdk/blobfs.h 00:04:46.359 CC examples/interrupt_tgt/interrupt_tgt.o 00:04:46.359 TEST_HEADER include/spdk/blob.h 00:04:46.359 TEST_HEADER include/spdk/conf.h 00:04:46.359 TEST_HEADER include/spdk/config.h 00:04:46.621 TEST_HEADER include/spdk/cpuset.h 00:04:46.621 TEST_HEADER include/spdk/crc16.h 00:04:46.621 TEST_HEADER include/spdk/crc32.h 00:04:46.621 TEST_HEADER include/spdk/crc64.h 00:04:46.621 TEST_HEADER include/spdk/dif.h 00:04:46.621 TEST_HEADER include/spdk/dma.h 00:04:46.621 TEST_HEADER include/spdk/endian.h 00:04:46.621 TEST_HEADER include/spdk/env_dpdk.h 00:04:46.621 TEST_HEADER include/spdk/env.h 00:04:46.621 TEST_HEADER include/spdk/event.h 00:04:46.621 CC examples/ioat/perf/perf.o 00:04:46.621 TEST_HEADER include/spdk/fd_group.h 00:04:46.621 TEST_HEADER include/spdk/fd.h 00:04:46.621 TEST_HEADER include/spdk/file.h 00:04:46.621 CC test/thread/poller_perf/poller_perf.o 00:04:46.622 CC examples/util/zipf/zipf.o 00:04:46.622 TEST_HEADER include/spdk/fsdev.h 00:04:46.622 TEST_HEADER include/spdk/fsdev_module.h 00:04:46.622 TEST_HEADER include/spdk/ftl.h 00:04:46.622 TEST_HEADER include/spdk/fuse_dispatcher.h 00:04:46.622 TEST_HEADER include/spdk/gpt_spec.h 00:04:46.622 TEST_HEADER include/spdk/hexlify.h 00:04:46.622 TEST_HEADER include/spdk/histogram_data.h 00:04:46.622 TEST_HEADER include/spdk/idxd.h 00:04:46.622 TEST_HEADER include/spdk/idxd_spec.h 00:04:46.622 TEST_HEADER include/spdk/init.h 00:04:46.622 TEST_HEADER include/spdk/ioat.h 00:04:46.622 CC test/app/bdev_svc/bdev_svc.o 00:04:46.622 TEST_HEADER include/spdk/ioat_spec.h 00:04:46.622 TEST_HEADER include/spdk/iscsi_spec.h 00:04:46.622 TEST_HEADER include/spdk/json.h 00:04:46.622 TEST_HEADER include/spdk/jsonrpc.h 00:04:46.622 TEST_HEADER include/spdk/keyring.h 00:04:46.622 TEST_HEADER include/spdk/keyring_module.h 00:04:46.622 TEST_HEADER include/spdk/likely.h 00:04:46.622 TEST_HEADER include/spdk/log.h 00:04:46.622 TEST_HEADER include/spdk/lvol.h 00:04:46.622 TEST_HEADER include/spdk/md5.h 00:04:46.622 CC test/dma/test_dma/test_dma.o 00:04:46.622 TEST_HEADER include/spdk/memory.h 00:04:46.622 TEST_HEADER include/spdk/mmio.h 00:04:46.622 TEST_HEADER include/spdk/nbd.h 00:04:46.622 TEST_HEADER include/spdk/net.h 00:04:46.622 TEST_HEADER include/spdk/notify.h 00:04:46.622 TEST_HEADER include/spdk/nvme.h 00:04:46.622 TEST_HEADER include/spdk/nvme_intel.h 00:04:46.622 TEST_HEADER include/spdk/nvme_ocssd.h 00:04:46.622 CC test/env/mem_callbacks/mem_callbacks.o 00:04:46.622 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:04:46.622 TEST_HEADER include/spdk/nvme_spec.h 00:04:46.622 TEST_HEADER include/spdk/nvme_zns.h 00:04:46.622 TEST_HEADER 
include/spdk/nvmf_cmd.h 00:04:46.622 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:04:46.622 TEST_HEADER include/spdk/nvmf.h 00:04:46.622 TEST_HEADER include/spdk/nvmf_spec.h 00:04:46.622 TEST_HEADER include/spdk/nvmf_transport.h 00:04:46.622 TEST_HEADER include/spdk/opal.h 00:04:46.622 TEST_HEADER include/spdk/opal_spec.h 00:04:46.622 TEST_HEADER include/spdk/pci_ids.h 00:04:46.622 TEST_HEADER include/spdk/pipe.h 00:04:46.622 TEST_HEADER include/spdk/queue.h 00:04:46.622 TEST_HEADER include/spdk/reduce.h 00:04:46.622 TEST_HEADER include/spdk/rpc.h 00:04:46.622 TEST_HEADER include/spdk/scheduler.h 00:04:46.622 TEST_HEADER include/spdk/scsi.h 00:04:46.622 TEST_HEADER include/spdk/scsi_spec.h 00:04:46.622 LINK rpc_client_test 00:04:46.622 TEST_HEADER include/spdk/sock.h 00:04:46.622 TEST_HEADER include/spdk/stdinc.h 00:04:46.622 TEST_HEADER include/spdk/string.h 00:04:46.622 TEST_HEADER include/spdk/thread.h 00:04:46.622 TEST_HEADER include/spdk/trace.h 00:04:46.622 TEST_HEADER include/spdk/trace_parser.h 00:04:46.622 TEST_HEADER include/spdk/tree.h 00:04:46.622 TEST_HEADER include/spdk/ublk.h 00:04:46.622 LINK interrupt_tgt 00:04:46.622 LINK poller_perf 00:04:46.622 TEST_HEADER include/spdk/util.h 00:04:46.622 TEST_HEADER include/spdk/uuid.h 00:04:46.622 TEST_HEADER include/spdk/version.h 00:04:46.622 TEST_HEADER include/spdk/vfio_user_pci.h 00:04:46.622 TEST_HEADER include/spdk/vfio_user_spec.h 00:04:46.622 TEST_HEADER include/spdk/vhost.h 00:04:46.622 TEST_HEADER include/spdk/vmd.h 00:04:46.622 TEST_HEADER include/spdk/xor.h 00:04:46.622 TEST_HEADER include/spdk/zipf.h 00:04:46.622 CXX test/cpp_headers/accel.o 00:04:46.622 LINK zipf 00:04:46.622 LINK ioat_perf 00:04:46.622 LINK bdev_svc 00:04:46.884 LINK spdk_trace 00:04:46.884 CXX test/cpp_headers/accel_module.o 00:04:46.884 CC examples/ioat/verify/verify.o 00:04:46.884 CC app/trace_record/trace_record.o 00:04:46.884 CC app/nvmf_tgt/nvmf_main.o 00:04:46.884 CC test/app/histogram_perf/histogram_perf.o 00:04:46.884 CXX test/cpp_headers/assert.o 00:04:46.884 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:04:47.146 LINK mem_callbacks 00:04:47.146 LINK test_dma 00:04:47.146 LINK verify 00:04:47.146 LINK histogram_perf 00:04:47.146 CC test/event/event_perf/event_perf.o 00:04:47.146 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:04:47.146 CXX test/cpp_headers/barrier.o 00:04:47.146 LINK nvmf_tgt 00:04:47.146 LINK spdk_trace_record 00:04:47.146 LINK event_perf 00:04:47.146 CC test/env/vtophys/vtophys.o 00:04:47.146 CXX test/cpp_headers/base64.o 00:04:47.146 CXX test/cpp_headers/bdev.o 00:04:47.146 CXX test/cpp_headers/bdev_module.o 00:04:47.407 LINK vtophys 00:04:47.407 CC examples/sock/hello_world/hello_sock.o 00:04:47.407 CC examples/thread/thread/thread_ex.o 00:04:47.407 CC app/iscsi_tgt/iscsi_tgt.o 00:04:47.407 CXX test/cpp_headers/bdev_zone.o 00:04:47.407 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:04:47.407 CC test/event/reactor/reactor.o 00:04:47.407 LINK nvme_fuzz 00:04:47.407 CC test/app/jsoncat/jsoncat.o 00:04:47.407 CXX test/cpp_headers/bit_array.o 00:04:47.407 LINK reactor 00:04:47.407 LINK hello_sock 00:04:47.407 LINK env_dpdk_post_init 00:04:47.407 LINK jsoncat 00:04:47.669 LINK iscsi_tgt 00:04:47.669 CC test/env/memory/memory_ut.o 00:04:47.669 CC test/app/stub/stub.o 00:04:47.669 CXX test/cpp_headers/bit_pool.o 00:04:47.669 LINK thread 00:04:47.669 CXX test/cpp_headers/blob_bdev.o 00:04:47.669 CXX test/cpp_headers/blobfs_bdev.o 00:04:47.669 CXX test/cpp_headers/blobfs.o 00:04:47.669 CC test/event/reactor_perf/reactor_perf.o 
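The long TEST_HEADER list above, paired with the per-header CXX objects that follow, is the usual pattern for verifying that every public SPDK header compiles standalone. A minimal sketch of how such a harness could be generated is below; the loop and file layout are illustrative assumptions, not the actual SPDK build rules.

```bash
# Hypothetical generator for a header self-containment check (assumed
# layout): emit one translation unit per public header, each including
# only that header, then compile it in isolation.
for h in include/spdk/*.h; do
    name=$(basename "$h" .h)
    tu="test/cpp_headers/${name}.cpp"
    printf '#include <spdk/%s.h>\n' "$name" > "$tu"
    c++ -std=c++17 -Iinclude -c "$tu" -o "${tu%.cpp}.o"
done
```

Each object such as test/cpp_headers/accel.o then only builds if the corresponding header pulls in everything it needs.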
00:04:47.669 LINK stub 00:04:47.930 CXX test/cpp_headers/blob.o 00:04:47.930 CC app/spdk_tgt/spdk_tgt.o 00:04:47.930 CC test/env/pci/pci_ut.o 00:04:47.930 LINK reactor_perf 00:04:47.930 CC examples/vmd/lsvmd/lsvmd.o 00:04:47.930 CC test/accel/dif/dif.o 00:04:47.930 CXX test/cpp_headers/conf.o 00:04:47.930 CC test/blobfs/mkfs/mkfs.o 00:04:47.930 LINK spdk_tgt 00:04:47.930 LINK lsvmd 00:04:47.930 CC test/event/app_repeat/app_repeat.o 00:04:47.930 CC examples/idxd/perf/perf.o 00:04:48.191 CXX test/cpp_headers/config.o 00:04:48.191 CXX test/cpp_headers/cpuset.o 00:04:48.191 LINK mkfs 00:04:48.191 LINK app_repeat 00:04:48.191 CC examples/vmd/led/led.o 00:04:48.191 CC app/spdk_lspci/spdk_lspci.o 00:04:48.191 CXX test/cpp_headers/crc16.o 00:04:48.191 LINK pci_ut 00:04:48.191 CXX test/cpp_headers/crc32.o 00:04:48.453 LINK led 00:04:48.453 LINK spdk_lspci 00:04:48.453 LINK idxd_perf 00:04:48.453 CC test/event/scheduler/scheduler.o 00:04:48.453 CXX test/cpp_headers/crc64.o 00:04:48.453 CC app/spdk_nvme_perf/perf.o 00:04:48.453 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:04:48.453 CC test/lvol/esnap/esnap.o 00:04:48.714 CXX test/cpp_headers/dif.o 00:04:48.714 CC test/nvme/aer/aer.o 00:04:48.714 LINK dif 00:04:48.714 LINK iscsi_fuzz 00:04:48.714 LINK scheduler 00:04:48.714 CC examples/fsdev/hello_world/hello_fsdev.o 00:04:48.714 LINK memory_ut 00:04:48.714 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:04:48.714 CXX test/cpp_headers/dma.o 00:04:48.714 CXX test/cpp_headers/endian.o 00:04:48.714 CXX test/cpp_headers/env_dpdk.o 00:04:48.714 CXX test/cpp_headers/env.o 00:04:48.714 CXX test/cpp_headers/event.o 00:04:48.975 LINK aer 00:04:48.975 CXX test/cpp_headers/fd_group.o 00:04:48.975 CXX test/cpp_headers/fd.o 00:04:48.975 LINK hello_fsdev 00:04:48.975 CC examples/accel/perf/accel_perf.o 00:04:48.975 CXX test/cpp_headers/file.o 00:04:48.975 CC test/nvme/reset/reset.o 00:04:48.975 CC test/nvme/sgl/sgl.o 00:04:48.975 LINK vhost_fuzz 00:04:48.975 CXX test/cpp_headers/fsdev.o 00:04:49.237 CC test/nvme/e2edp/nvme_dp.o 00:04:49.237 CC examples/nvme/hello_world/hello_world.o 00:04:49.237 CC examples/blob/hello_world/hello_blob.o 00:04:49.237 CXX test/cpp_headers/fsdev_module.o 00:04:49.237 LINK reset 00:04:49.237 CC examples/nvme/reconnect/reconnect.o 00:04:49.237 LINK sgl 00:04:49.237 CXX test/cpp_headers/ftl.o 00:04:49.498 LINK nvme_dp 00:04:49.498 LINK spdk_nvme_perf 00:04:49.498 LINK hello_blob 00:04:49.498 LINK hello_world 00:04:49.498 CC examples/nvme/nvme_manage/nvme_manage.o 00:04:49.498 LINK accel_perf 00:04:49.498 CXX test/cpp_headers/fuse_dispatcher.o 00:04:49.498 CC examples/nvme/arbitration/arbitration.o 00:04:49.498 CC test/nvme/overhead/overhead.o 00:04:49.498 CXX test/cpp_headers/gpt_spec.o 00:04:49.498 LINK reconnect 00:04:49.498 CC test/nvme/err_injection/err_injection.o 00:04:49.498 CC app/spdk_nvme_identify/identify.o 00:04:49.758 CC test/nvme/startup/startup.o 00:04:49.758 CC examples/blob/cli/blobcli.o 00:04:49.758 LINK arbitration 00:04:49.758 CXX test/cpp_headers/hexlify.o 00:04:49.758 LINK err_injection 00:04:49.758 LINK startup 00:04:49.758 CC app/spdk_nvme_discover/discovery_aer.o 00:04:49.758 LINK overhead 00:04:49.758 LINK nvme_manage 00:04:49.758 CXX test/cpp_headers/histogram_data.o 00:04:49.758 CC test/nvme/reserve/reserve.o 00:04:50.017 CC app/spdk_top/spdk_top.o 00:04:50.017 CXX test/cpp_headers/idxd.o 00:04:50.017 LINK spdk_nvme_discover 00:04:50.017 CC examples/nvme/hotplug/hotplug.o 00:04:50.017 CC app/vhost/vhost.o 00:04:50.017 LINK reserve 00:04:50.017 CC 
examples/bdev/hello_world/hello_bdev.o 00:04:50.017 CXX test/cpp_headers/idxd_spec.o 00:04:50.275 LINK blobcli 00:04:50.275 LINK hotplug 00:04:50.275 CC test/nvme/simple_copy/simple_copy.o 00:04:50.275 CC examples/bdev/bdevperf/bdevperf.o 00:04:50.275 CXX test/cpp_headers/init.o 00:04:50.275 LINK vhost 00:04:50.275 LINK hello_bdev 00:04:50.275 CXX test/cpp_headers/ioat.o 00:04:50.275 CC examples/nvme/cmb_copy/cmb_copy.o 00:04:50.275 LINK simple_copy 00:04:50.275 CXX test/cpp_headers/ioat_spec.o 00:04:50.275 LINK spdk_nvme_identify 00:04:50.275 CXX test/cpp_headers/iscsi_spec.o 00:04:50.534 CXX test/cpp_headers/json.o 00:04:50.534 LINK cmb_copy 00:04:50.534 CC test/bdev/bdevio/bdevio.o 00:04:50.534 CC test/nvme/connect_stress/connect_stress.o 00:04:50.534 CXX test/cpp_headers/jsonrpc.o 00:04:50.534 CC test/nvme/boot_partition/boot_partition.o 00:04:50.534 CC examples/nvme/abort/abort.o 00:04:50.534 CC test/nvme/compliance/nvme_compliance.o 00:04:50.534 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:04:50.793 LINK boot_partition 00:04:50.793 LINK connect_stress 00:04:50.794 CXX test/cpp_headers/keyring.o 00:04:50.794 LINK pmr_persistence 00:04:50.794 LINK spdk_top 00:04:50.794 LINK bdevio 00:04:50.794 CXX test/cpp_headers/keyring_module.o 00:04:50.794 CC test/nvme/fused_ordering/fused_ordering.o 00:04:50.794 CXX test/cpp_headers/likely.o 00:04:50.794 CC app/spdk_dd/spdk_dd.o 00:04:50.794 LINK abort 00:04:51.052 CXX test/cpp_headers/log.o 00:04:51.052 LINK nvme_compliance 00:04:51.052 CC test/nvme/doorbell_aers/doorbell_aers.o 00:04:51.052 LINK fused_ordering 00:04:51.052 LINK bdevperf 00:04:51.052 CC test/nvme/cuse/cuse.o 00:04:51.052 CC test/nvme/fdp/fdp.o 00:04:51.052 CXX test/cpp_headers/lvol.o 00:04:51.052 CC app/fio/nvme/fio_plugin.o 00:04:51.052 CXX test/cpp_headers/md5.o 00:04:51.052 LINK doorbell_aers 00:04:51.052 LINK spdk_dd 00:04:51.311 CC app/fio/bdev/fio_plugin.o 00:04:51.311 CXX test/cpp_headers/memory.o 00:04:51.311 CXX test/cpp_headers/mmio.o 00:04:51.311 CXX test/cpp_headers/nbd.o 00:04:51.311 CXX test/cpp_headers/net.o 00:04:51.311 CXX test/cpp_headers/notify.o 00:04:51.311 CC examples/nvmf/nvmf/nvmf.o 00:04:51.311 LINK fdp 00:04:51.311 CXX test/cpp_headers/nvme.o 00:04:51.311 CXX test/cpp_headers/nvme_intel.o 00:04:51.311 CXX test/cpp_headers/nvme_ocssd.o 00:04:51.311 CXX test/cpp_headers/nvme_ocssd_spec.o 00:04:51.570 CXX test/cpp_headers/nvme_spec.o 00:04:51.570 CXX test/cpp_headers/nvme_zns.o 00:04:51.570 CXX test/cpp_headers/nvmf_cmd.o 00:04:51.570 CXX test/cpp_headers/nvmf_fc_spec.o 00:04:51.570 LINK nvmf 00:04:51.570 LINK spdk_bdev 00:04:51.570 CXX test/cpp_headers/nvmf.o 00:04:51.570 CXX test/cpp_headers/nvmf_spec.o 00:04:51.570 LINK spdk_nvme 00:04:51.570 CXX test/cpp_headers/nvmf_transport.o 00:04:51.570 CXX test/cpp_headers/opal.o 00:04:51.570 CXX test/cpp_headers/opal_spec.o 00:04:51.570 CXX test/cpp_headers/pci_ids.o 00:04:51.570 CXX test/cpp_headers/pipe.o 00:04:51.570 CXX test/cpp_headers/queue.o 00:04:51.570 CXX test/cpp_headers/reduce.o 00:04:51.828 CXX test/cpp_headers/rpc.o 00:04:51.828 CXX test/cpp_headers/scheduler.o 00:04:51.828 CXX test/cpp_headers/scsi.o 00:04:51.828 CXX test/cpp_headers/scsi_spec.o 00:04:51.828 CXX test/cpp_headers/sock.o 00:04:51.828 CXX test/cpp_headers/stdinc.o 00:04:51.828 CXX test/cpp_headers/string.o 00:04:51.828 CXX test/cpp_headers/thread.o 00:04:51.828 CXX test/cpp_headers/trace.o 00:04:51.828 CXX test/cpp_headers/trace_parser.o 00:04:51.828 CXX test/cpp_headers/tree.o 00:04:51.828 CXX test/cpp_headers/ublk.o 
00:04:51.828 CXX test/cpp_headers/util.o 00:04:51.828 CXX test/cpp_headers/uuid.o 00:04:51.828 CXX test/cpp_headers/version.o 00:04:51.828 CXX test/cpp_headers/vfio_user_pci.o 00:04:51.828 CXX test/cpp_headers/vfio_user_spec.o 00:04:51.828 CXX test/cpp_headers/vhost.o 00:04:52.089 CXX test/cpp_headers/vmd.o 00:04:52.089 CXX test/cpp_headers/xor.o 00:04:52.089 CXX test/cpp_headers/zipf.o 00:04:52.089 LINK cuse 00:04:53.033 LINK esnap 00:04:53.607 00:04:53.607 real 0m59.049s 00:04:53.607 user 4m55.012s 00:04:53.607 sys 0m49.466s 00:04:53.607 ************************************ 00:04:53.607 END TEST make 00:04:53.607 ************************************ 00:04:53.607 06:38:46 make -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:04:53.607 06:38:46 make -- common/autotest_common.sh@10 -- $ set +x 00:04:53.607 06:38:46 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:04:53.607 06:38:46 -- pm/common@29 -- $ signal_monitor_resources TERM 00:04:53.607 06:38:46 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:04:53.607 06:38:46 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:53.607 06:38:46 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:04:53.607 06:38:46 -- pm/common@44 -- $ pid=5811 00:04:53.607 06:38:46 -- pm/common@50 -- $ kill -TERM 5811 00:04:53.607 06:38:46 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:53.607 06:38:46 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:04:53.607 06:38:46 -- pm/common@44 -- $ pid=5812 00:04:53.607 06:38:46 -- pm/common@50 -- $ kill -TERM 5812 00:04:53.607 06:38:46 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 )) 00:04:53.607 06:38:46 -- spdk/autorun.sh@27 -- $ sudo -E /home/vagrant/spdk_repo/spdk/autotest.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:04:53.607 06:38:46 -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:53.607 06:38:46 -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:53.607 06:38:46 -- common/autotest_common.sh@1693 -- # lcov --version 00:04:53.607 06:38:46 -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:53.607 06:38:46 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:53.607 06:38:46 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:53.607 06:38:46 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:53.607 06:38:46 -- scripts/common.sh@336 -- # IFS=.-: 00:04:53.607 06:38:46 -- scripts/common.sh@336 -- # read -ra ver1 00:04:53.607 06:38:46 -- scripts/common.sh@337 -- # IFS=.-: 00:04:53.607 06:38:46 -- scripts/common.sh@337 -- # read -ra ver2 00:04:53.607 06:38:46 -- scripts/common.sh@338 -- # local 'op=<' 00:04:53.607 06:38:46 -- scripts/common.sh@340 -- # ver1_l=2 00:04:53.607 06:38:46 -- scripts/common.sh@341 -- # ver2_l=1 00:04:53.607 06:38:46 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:53.607 06:38:46 -- scripts/common.sh@344 -- # case "$op" in 00:04:53.607 06:38:46 -- scripts/common.sh@345 -- # : 1 00:04:53.607 06:38:46 -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:53.607 06:38:46 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:53.607 06:38:46 -- scripts/common.sh@365 -- # decimal 1 00:04:53.607 06:38:46 -- scripts/common.sh@353 -- # local d=1 00:04:53.608 06:38:46 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:53.608 06:38:46 -- scripts/common.sh@355 -- # echo 1 00:04:53.608 06:38:46 -- scripts/common.sh@365 -- # ver1[v]=1 00:04:53.608 06:38:46 -- scripts/common.sh@366 -- # decimal 2 00:04:53.608 06:38:46 -- scripts/common.sh@353 -- # local d=2 00:04:53.608 06:38:46 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:53.608 06:38:46 -- scripts/common.sh@355 -- # echo 2 00:04:53.608 06:38:46 -- scripts/common.sh@366 -- # ver2[v]=2 00:04:53.608 06:38:46 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:53.608 06:38:46 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:53.608 06:38:46 -- scripts/common.sh@368 -- # return 0 00:04:53.608 06:38:46 -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:53.608 06:38:46 -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:53.608 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:53.608 --rc genhtml_branch_coverage=1 00:04:53.608 --rc genhtml_function_coverage=1 00:04:53.608 --rc genhtml_legend=1 00:04:53.608 --rc geninfo_all_blocks=1 00:04:53.608 --rc geninfo_unexecuted_blocks=1 00:04:53.608 00:04:53.608 ' 00:04:53.608 06:38:46 -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:53.608 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:53.608 --rc genhtml_branch_coverage=1 00:04:53.608 --rc genhtml_function_coverage=1 00:04:53.608 --rc genhtml_legend=1 00:04:53.608 --rc geninfo_all_blocks=1 00:04:53.608 --rc geninfo_unexecuted_blocks=1 00:04:53.608 00:04:53.608 ' 00:04:53.608 06:38:46 -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:53.608 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:53.608 --rc genhtml_branch_coverage=1 00:04:53.608 --rc genhtml_function_coverage=1 00:04:53.608 --rc genhtml_legend=1 00:04:53.608 --rc geninfo_all_blocks=1 00:04:53.608 --rc geninfo_unexecuted_blocks=1 00:04:53.608 00:04:53.608 ' 00:04:53.608 06:38:46 -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:53.608 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:53.608 --rc genhtml_branch_coverage=1 00:04:53.608 --rc genhtml_function_coverage=1 00:04:53.608 --rc genhtml_legend=1 00:04:53.608 --rc geninfo_all_blocks=1 00:04:53.608 --rc geninfo_unexecuted_blocks=1 00:04:53.608 00:04:53.608 ' 00:04:53.608 06:38:46 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:04:53.608 06:38:46 -- nvmf/common.sh@7 -- # uname -s 00:04:53.608 06:38:46 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:53.608 06:38:46 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:53.608 06:38:46 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:53.608 06:38:46 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:53.608 06:38:46 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:53.608 06:38:46 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:53.608 06:38:46 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:53.608 06:38:46 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:53.608 06:38:46 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:53.608 06:38:46 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:53.608 06:38:46 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a30b9165-d26e-42a9-8b3c-daabdf272c4b 00:04:53.608 
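The xtrace above records the lcov version gate from scripts/common.sh: both version strings are split on `.-:` into arrays and compared field by field. A minimal re-implementation reconstructed from that trace (a sketch, not the canonical helper) could look like:

```bash
# Compare dotted versions numerically, field by field, as traced above.
# Missing fields behave as 0 inside bash arithmetic.
cmp_versions() {
    local -a ver1 ver2
    local op=$2 v
    IFS='.-:' read -ra ver1 <<< "$1"
    IFS='.-:' read -ra ver2 <<< "$3"
    for ((v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++)); do
        (( ver1[v] > ver2[v] )) && { [[ $op == '>' ]]; return; }
        (( ver1[v] < ver2[v] )) && { [[ $op == '<' ]]; return; }
    done
    [[ $op == '=' ]]
}
lt() { cmp_versions "$1" '<' "$2"; }   # lt 1.15 2 -> true, as in the trace
```

Here `lt 1.15 2` succeeding appears to be what selects the pre-2.x lcov flag set (`--rc lcov_branch_coverage=1 ...`) exported just after.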
06:38:46 -- nvmf/common.sh@18 -- # NVME_HOSTID=a30b9165-d26e-42a9-8b3c-daabdf272c4b 00:04:53.608 06:38:46 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:53.608 06:38:46 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:53.608 06:38:46 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:53.608 06:38:46 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:53.608 06:38:46 -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:04:53.608 06:38:46 -- scripts/common.sh@15 -- # shopt -s extglob 00:04:53.608 06:38:46 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:53.608 06:38:46 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:53.608 06:38:46 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:53.608 06:38:46 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:53.608 06:38:46 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:53.608 06:38:46 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:53.608 06:38:46 -- paths/export.sh@5 -- # export PATH 00:04:53.608 06:38:46 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:53.608 06:38:46 -- nvmf/common.sh@51 -- # : 0 00:04:53.608 06:38:46 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:53.608 06:38:46 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:04:53.608 06:38:46 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:53.608 06:38:46 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:53.608 06:38:46 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:53.608 06:38:46 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:53.608 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:53.608 06:38:46 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:53.608 06:38:46 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:53.608 06:38:46 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:53.608 06:38:46 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:04:53.608 06:38:46 -- spdk/autotest.sh@32 -- # uname -s 00:04:53.608 06:38:46 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:04:53.608 06:38:46 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:04:53.608 06:38:46 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:04:53.608 06:38:46 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:04:53.608 06:38:46 -- 
spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:04:53.608 06:38:46 -- spdk/autotest.sh@44 -- # modprobe nbd 00:04:53.608 06:38:46 -- spdk/autotest.sh@46 -- # type -P udevadm 00:04:53.608 06:38:46 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:04:53.608 06:38:46 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:04:53.608 06:38:46 -- spdk/autotest.sh@48 -- # udevadm_pid=66503 00:04:53.608 06:38:46 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:04:53.608 06:38:46 -- pm/common@17 -- # local monitor 00:04:53.608 06:38:46 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:53.608 06:38:46 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:53.608 06:38:46 -- pm/common@25 -- # sleep 1 00:04:53.609 06:38:46 -- pm/common@21 -- # date +%s 00:04:53.609 06:38:46 -- pm/common@21 -- # date +%s 00:04:53.609 06:38:46 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1731911926 00:04:53.609 06:38:46 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1731911926 00:04:53.609 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1731911926_collect-cpu-load.pm.log 00:04:53.609 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1731911926_collect-vmstat.pm.log 00:04:54.994 06:38:47 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:04:54.994 06:38:47 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:04:54.994 06:38:47 -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:54.994 06:38:47 -- common/autotest_common.sh@10 -- # set +x 00:04:54.994 06:38:47 -- spdk/autotest.sh@59 -- # create_test_list 00:04:54.994 06:38:47 -- common/autotest_common.sh@752 -- # xtrace_disable 00:04:54.994 06:38:47 -- common/autotest_common.sh@10 -- # set +x 00:04:54.994 06:38:47 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:04:54.994 06:38:47 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:04:54.994 06:38:47 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:04:54.994 06:38:47 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:04:54.994 06:38:47 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:04:54.994 06:38:47 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:04:54.994 06:38:47 -- common/autotest_common.sh@1457 -- # uname 00:04:54.994 06:38:47 -- common/autotest_common.sh@1457 -- # '[' Linux = FreeBSD ']' 00:04:54.994 06:38:47 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:04:54.994 06:38:47 -- common/autotest_common.sh@1477 -- # uname 00:04:54.994 06:38:47 -- common/autotest_common.sh@1477 -- # [[ Linux = FreeBSD ]] 00:04:54.994 06:38:47 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:04:54.994 06:38:47 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:04:54.994 lcov: LCOV version 1.15 00:04:54.994 06:38:47 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc 
geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:05:09.923 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:05:09.923 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:05:24.822 06:39:17 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:05:24.822 06:39:17 -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:24.822 06:39:17 -- common/autotest_common.sh@10 -- # set +x 00:05:24.822 06:39:17 -- spdk/autotest.sh@78 -- # rm -f 00:05:24.822 06:39:17 -- spdk/autotest.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:25.084 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:25.703 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:05:25.703 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:05:25.703 0000:00:12.0 (1b36 0010): Already using the nvme driver 00:05:25.703 0000:00:13.0 (1b36 0010): Already using the nvme driver 00:05:25.703 06:39:18 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:05:25.703 06:39:18 -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:05:25.703 06:39:18 -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:05:25.703 06:39:18 -- common/autotest_common.sh@1658 -- # local nvme bdf 00:05:25.703 06:39:18 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:05:25.703 06:39:18 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n1 00:05:25.703 06:39:18 -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:05:25.703 06:39:18 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:05:25.703 06:39:18 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:05:25.703 06:39:18 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:05:25.703 06:39:18 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1n1 00:05:25.703 06:39:18 -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:05:25.703 06:39:18 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:05:25.703 06:39:18 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:05:25.703 06:39:18 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:05:25.703 06:39:18 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme2n1 00:05:25.703 06:39:18 -- common/autotest_common.sh@1650 -- # local device=nvme2n1 00:05:25.703 06:39:18 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:05:25.703 06:39:18 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:05:25.703 06:39:18 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:05:25.703 06:39:18 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme2n2 00:05:25.703 06:39:18 -- common/autotest_common.sh@1650 -- # local device=nvme2n2 00:05:25.703 06:39:18 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:05:25.704 06:39:18 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:05:25.704 06:39:18 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:05:25.704 06:39:18 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme2n3 00:05:25.704 06:39:18 -- common/autotest_common.sh@1650 -- # local device=nvme2n3 00:05:25.704 06:39:18 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:05:25.704 06:39:18 
-- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:05:25.704 06:39:18 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:05:25.704 06:39:18 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme3c3n1 00:05:25.704 06:39:18 -- common/autotest_common.sh@1650 -- # local device=nvme3c3n1 00:05:25.704 06:39:18 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:05:25.704 06:39:18 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:05:25.704 06:39:18 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:05:25.704 06:39:18 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme3n1 00:05:25.704 06:39:18 -- common/autotest_common.sh@1650 -- # local device=nvme3n1 00:05:25.704 06:39:18 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:05:25.704 06:39:18 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:05:25.704 06:39:18 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:05:25.704 06:39:18 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:25.704 06:39:18 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:25.704 06:39:18 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:05:25.704 06:39:18 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:05:25.704 06:39:18 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:05:25.704 No valid GPT data, bailing 00:05:25.704 06:39:18 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:05:25.704 06:39:18 -- scripts/common.sh@394 -- # pt= 00:05:25.704 06:39:18 -- scripts/common.sh@395 -- # return 1 00:05:25.704 06:39:18 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:05:25.704 1+0 records in 00:05:25.704 1+0 records out 00:05:25.704 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0285362 s, 36.7 MB/s 00:05:25.704 06:39:18 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:25.704 06:39:18 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:25.704 06:39:18 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n1 00:05:25.704 06:39:18 -- scripts/common.sh@381 -- # local block=/dev/nvme1n1 pt 00:05:25.704 06:39:18 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:05:25.982 No valid GPT data, bailing 00:05:25.982 06:39:18 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:05:25.982 06:39:18 -- scripts/common.sh@394 -- # pt= 00:05:25.982 06:39:18 -- scripts/common.sh@395 -- # return 1 00:05:25.982 06:39:18 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:05:25.982 1+0 records in 00:05:25.982 1+0 records out 00:05:25.982 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00632615 s, 166 MB/s 00:05:25.982 06:39:18 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:25.982 06:39:18 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:25.982 06:39:18 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme2n1 00:05:25.982 06:39:18 -- scripts/common.sh@381 -- # local block=/dev/nvme2n1 pt 00:05:25.982 06:39:18 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n1 00:05:25.982 No valid GPT data, bailing 00:05:25.982 06:39:18 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme2n1 00:05:25.982 06:39:18 -- scripts/common.sh@394 -- # pt= 00:05:25.982 06:39:18 -- scripts/common.sh@395 -- # return 1 00:05:25.982 06:39:18 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme2n1 bs=1M count=1 00:05:25.982 1+0 
records in 00:05:25.982 1+0 records out 00:05:25.982 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00564442 s, 186 MB/s 00:05:25.982 06:39:18 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:25.982 06:39:18 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:25.982 06:39:18 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme2n2 00:05:25.982 06:39:18 -- scripts/common.sh@381 -- # local block=/dev/nvme2n2 pt 00:05:25.982 06:39:18 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n2 00:05:25.982 No valid GPT data, bailing 00:05:25.982 06:39:18 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme2n2 00:05:25.982 06:39:19 -- scripts/common.sh@394 -- # pt= 00:05:25.982 06:39:19 -- scripts/common.sh@395 -- # return 1 00:05:25.982 06:39:19 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme2n2 bs=1M count=1 00:05:25.982 1+0 records in 00:05:25.982 1+0 records out 00:05:25.982 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00588845 s, 178 MB/s 00:05:25.982 06:39:19 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:25.982 06:39:19 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:25.982 06:39:19 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme2n3 00:05:25.982 06:39:19 -- scripts/common.sh@381 -- # local block=/dev/nvme2n3 pt 00:05:25.982 06:39:19 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n3 00:05:25.982 No valid GPT data, bailing 00:05:26.242 06:39:19 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme2n3 00:05:26.242 06:39:19 -- scripts/common.sh@394 -- # pt= 00:05:26.242 06:39:19 -- scripts/common.sh@395 -- # return 1 00:05:26.242 06:39:19 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme2n3 bs=1M count=1 00:05:26.242 1+0 records in 00:05:26.242 1+0 records out 00:05:26.242 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00649922 s, 161 MB/s 00:05:26.242 06:39:19 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:26.242 06:39:19 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:26.242 06:39:19 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme3n1 00:05:26.242 06:39:19 -- scripts/common.sh@381 -- # local block=/dev/nvme3n1 pt 00:05:26.242 06:39:19 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme3n1 00:05:26.242 No valid GPT data, bailing 00:05:26.242 06:39:19 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme3n1 00:05:26.242 06:39:19 -- scripts/common.sh@394 -- # pt= 00:05:26.242 06:39:19 -- scripts/common.sh@395 -- # return 1 00:05:26.242 06:39:19 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme3n1 bs=1M count=1 00:05:26.242 1+0 records in 00:05:26.242 1+0 records out 00:05:26.242 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00485768 s, 216 MB/s 00:05:26.242 06:39:19 -- spdk/autotest.sh@105 -- # sync 00:05:26.242 06:39:19 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:05:26.242 06:39:19 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:05:26.242 06:39:19 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:05:28.157 06:39:20 -- spdk/autotest.sh@111 -- # uname -s 00:05:28.157 06:39:20 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:05:28.157 06:39:20 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:05:28.157 06:39:20 -- spdk/autotest.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:05:28.419 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:28.991 
Hugepages 00:05:28.991 node hugesize free / total 00:05:28.991 node0 1048576kB 0 / 0 00:05:28.991 node0 2048kB 0 / 0 00:05:28.991 00:05:28.991 Type BDF Vendor Device NUMA Driver Device Block devices 00:05:28.991 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:05:28.991 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:05:28.991 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 00:05:29.253 NVMe 0000:00:12.0 1b36 0010 unknown nvme nvme2 nvme2n1 nvme2n2 nvme2n3 00:05:29.253 NVMe 0000:00:13.0 1b36 0010 unknown nvme nvme3 nvme3n1 00:05:29.253 06:39:22 -- spdk/autotest.sh@117 -- # uname -s 00:05:29.253 06:39:22 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:05:29.253 06:39:22 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:05:29.253 06:39:22 -- common/autotest_common.sh@1516 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:29.828 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:30.395 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:05:30.395 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:05:30.395 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:05:30.395 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:05:30.395 06:39:23 -- common/autotest_common.sh@1517 -- # sleep 1 00:05:31.329 06:39:24 -- common/autotest_common.sh@1518 -- # bdfs=() 00:05:31.329 06:39:24 -- common/autotest_common.sh@1518 -- # local bdfs 00:05:31.329 06:39:24 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:05:31.329 06:39:24 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 00:05:31.329 06:39:24 -- common/autotest_common.sh@1498 -- # bdfs=() 00:05:31.329 06:39:24 -- common/autotest_common.sh@1498 -- # local bdfs 00:05:31.329 06:39:24 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:31.329 06:39:24 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:05:31.329 06:39:24 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:05:31.329 06:39:24 -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:05:31.329 06:39:24 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:05:31.329 06:39:24 -- common/autotest_common.sh@1522 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:31.587 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:31.845 Waiting for block devices as requested 00:05:31.845 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:05:31.845 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:05:32.104 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:05:32.104 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:05:37.370 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:05:37.370 06:39:30 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:05:37.370 06:39:30 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 00:05:37.370 06:39:30 -- common/autotest_common.sh@1487 -- # grep 0000:00:10.0/nvme/nvme 00:05:37.370 06:39:30 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:05:37.370 06:39:30 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:05:37.370 06:39:30 -- common/autotest_common.sh@1488 -- # 
[[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:05:37.370 06:39:30 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:05:37.370 06:39:30 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme1 00:05:37.370 06:39:30 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme1 00:05:37.370 06:39:30 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme1 ]] 00:05:37.370 06:39:30 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme1 00:05:37.370 06:39:30 -- common/autotest_common.sh@1531 -- # grep oacs 00:05:37.370 06:39:30 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:05:37.370 06:39:30 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:05:37.370 06:39:30 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:05:37.370 06:39:30 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:05:37.370 06:39:30 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:05:37.370 06:39:30 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme1 00:05:37.370 06:39:30 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:05:37.370 06:39:30 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:05:37.370 06:39:30 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:05:37.370 06:39:30 -- common/autotest_common.sh@1543 -- # continue 00:05:37.370 06:39:30 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:05:37.370 06:39:30 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:05:37.370 06:39:30 -- common/autotest_common.sh@1487 -- # grep 0000:00:11.0/nvme/nvme 00:05:37.370 06:39:30 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:05:37.370 06:39:30 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:05:37.370 06:39:30 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:05:37.370 06:39:30 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:05:37.370 06:39:30 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 00:05:37.370 06:39:30 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0 00:05:37.370 06:39:30 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]] 00:05:37.370 06:39:30 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:05:37.370 06:39:30 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0 00:05:37.370 06:39:30 -- common/autotest_common.sh@1531 -- # grep oacs 00:05:37.370 06:39:30 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:05:37.370 06:39:30 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:05:37.370 06:39:30 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:05:37.370 06:39:30 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 00:05:37.370 06:39:30 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:05:37.370 06:39:30 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:05:37.370 06:39:30 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:05:37.370 06:39:30 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:05:37.370 06:39:30 -- common/autotest_common.sh@1543 -- # continue 00:05:37.370 06:39:30 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:05:37.370 06:39:30 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:12.0 00:05:37.370 06:39:30 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 
/sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:05:37.370 06:39:30 -- common/autotest_common.sh@1487 -- # grep 0000:00:12.0/nvme/nvme 00:05:37.370 06:39:30 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 00:05:37.370 06:39:30 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 ]] 00:05:37.370 06:39:30 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 00:05:37.370 06:39:30 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme2 00:05:37.370 06:39:30 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme2 00:05:37.370 06:39:30 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme2 ]] 00:05:37.370 06:39:30 -- common/autotest_common.sh@1531 -- # grep oacs 00:05:37.370 06:39:30 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme2 00:05:37.370 06:39:30 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:05:37.370 06:39:30 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:05:37.370 06:39:30 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:05:37.370 06:39:30 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:05:37.370 06:39:30 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme2 00:05:37.370 06:39:30 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:05:37.370 06:39:30 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:05:37.370 06:39:30 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:05:37.370 06:39:30 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:05:37.370 06:39:30 -- common/autotest_common.sh@1543 -- # continue 00:05:37.370 06:39:30 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:05:37.370 06:39:30 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:13.0 00:05:37.370 06:39:30 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:05:37.370 06:39:30 -- common/autotest_common.sh@1487 -- # grep 0000:00:13.0/nvme/nvme 00:05:37.370 06:39:30 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 00:05:37.370 06:39:30 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 ]] 00:05:37.370 06:39:30 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 00:05:37.370 06:39:30 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme3 00:05:37.370 06:39:30 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme3 00:05:37.370 06:39:30 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme3 ]] 00:05:37.370 06:39:30 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:05:37.370 06:39:30 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme3 00:05:37.370 06:39:30 -- common/autotest_common.sh@1531 -- # grep oacs 00:05:37.370 06:39:30 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:05:37.370 06:39:30 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:05:37.370 06:39:30 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:05:37.370 06:39:30 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:05:37.370 06:39:30 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme3 00:05:37.371 06:39:30 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:05:37.371 06:39:30 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:05:37.371 06:39:30 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 
00:05:37.371 06:39:30 -- common/autotest_common.sh@1543 -- # continue 00:05:37.371 06:39:30 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:05:37.371 06:39:30 -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:37.371 06:39:30 -- common/autotest_common.sh@10 -- # set +x 00:05:37.371 06:39:30 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:05:37.371 06:39:30 -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:37.371 06:39:30 -- common/autotest_common.sh@10 -- # set +x 00:05:37.371 06:39:30 -- spdk/autotest.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:37.629 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:38.196 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:05:38.196 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:05:38.196 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:05:38.196 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:05:38.196 06:39:31 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:05:38.196 06:39:31 -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:38.196 06:39:31 -- common/autotest_common.sh@10 -- # set +x 00:05:38.196 06:39:31 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:05:38.196 06:39:31 -- common/autotest_common.sh@1578 -- # mapfile -t bdfs 00:05:38.196 06:39:31 -- common/autotest_common.sh@1578 -- # get_nvme_bdfs_by_id 0x0a54 00:05:38.196 06:39:31 -- common/autotest_common.sh@1563 -- # bdfs=() 00:05:38.196 06:39:31 -- common/autotest_common.sh@1563 -- # _bdfs=() 00:05:38.196 06:39:31 -- common/autotest_common.sh@1563 -- # local bdfs _bdfs 00:05:38.196 06:39:31 -- common/autotest_common.sh@1564 -- # _bdfs=($(get_nvme_bdfs)) 00:05:38.196 06:39:31 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs 00:05:38.196 06:39:31 -- common/autotest_common.sh@1498 -- # bdfs=() 00:05:38.196 06:39:31 -- common/autotest_common.sh@1498 -- # local bdfs 00:05:38.196 06:39:31 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:38.196 06:39:31 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:05:38.196 06:39:31 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:05:38.196 06:39:31 -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:05:38.196 06:39:31 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:05:38.196 06:39:31 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:05:38.196 06:39:31 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:05:38.196 06:39:31 -- common/autotest_common.sh@1566 -- # device=0x0010 00:05:38.196 06:39:31 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:05:38.196 06:39:31 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:05:38.196 06:39:31 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:05:38.196 06:39:31 -- common/autotest_common.sh@1566 -- # device=0x0010 00:05:38.196 06:39:31 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:05:38.196 06:39:31 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:05:38.196 06:39:31 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:12.0/device 00:05:38.196 06:39:31 -- common/autotest_common.sh@1566 -- # device=0x0010 00:05:38.196 06:39:31 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 
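The pre_cleanup trace just above loops over the NVMe PCI addresses, resolves each to its /dev/nvmeX node via sysfs, and requires namespace management support (OACS bit 3) plus zero unallocated capacity before continuing. A condensed sketch reconstructed from that trace follows; the function name and structure are illustrative, not the canonical autotest helper, and the 0x0a54 device-ID scan for Opal-capable drives resumes in the log below.

```bash
# Per-controller readiness check, reconstructed from the xtrace above.
check_ctrlr() {
    local bdf=$1 sysfs ctrlr oacs unvmcap
    # Map the PCI address to its controller node, e.g. 0000:00:10.0 -> nvme1.
    sysfs=$(readlink -f /sys/class/nvme/nvme* | grep "$bdf/nvme/nvme")
    ctrlr=/dev/$(basename "$sysfs")
    oacs=$(nvme id-ctrl "$ctrlr" | grep oacs | cut -d: -f2)
    (( oacs & 0x8 )) || return 1            # no namespace management
    unvmcap=$(nvme id-ctrl "$ctrlr" | grep unvmcap | cut -d: -f2)
    [[ $unvmcap -eq 0 ]]                    # nothing left unallocated
}
check_ctrlr 0000:00:10.0 && echo ready      # matches oacs=0x12a, unvmcap=0 above
```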
00:05:38.196 06:39:31 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:05:38.196 06:39:31 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:13.0/device 00:05:38.196 06:39:31 -- common/autotest_common.sh@1566 -- # device=0x0010 00:05:38.196 06:39:31 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:05:38.196 06:39:31 -- common/autotest_common.sh@1572 -- # (( 0 > 0 )) 00:05:38.196 06:39:31 -- common/autotest_common.sh@1572 -- # return 0 00:05:38.196 06:39:31 -- common/autotest_common.sh@1579 -- # [[ -z '' ]] 00:05:38.196 06:39:31 -- common/autotest_common.sh@1580 -- # return 0 00:05:38.196 06:39:31 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:05:38.196 06:39:31 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:05:38.196 06:39:31 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:05:38.196 06:39:31 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:05:38.196 06:39:31 -- spdk/autotest.sh@149 -- # timing_enter lib 00:05:38.196 06:39:31 -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:38.196 06:39:31 -- common/autotest_common.sh@10 -- # set +x 00:05:38.196 06:39:31 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:05:38.196 06:39:31 -- spdk/autotest.sh@155 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:05:38.196 06:39:31 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:38.196 06:39:31 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:38.196 06:39:31 -- common/autotest_common.sh@10 -- # set +x 00:05:38.196 ************************************ 00:05:38.196 START TEST env 00:05:38.196 ************************************ 00:05:38.196 06:39:31 env -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:05:38.454 * Looking for test storage... 00:05:38.454 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:05:38.454 06:39:31 env -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:38.454 06:39:31 env -- common/autotest_common.sh@1693 -- # lcov --version 00:05:38.454 06:39:31 env -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:38.454 06:39:31 env -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:38.454 06:39:31 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:38.454 06:39:31 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:38.454 06:39:31 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:38.454 06:39:31 env -- scripts/common.sh@336 -- # IFS=.-: 00:05:38.454 06:39:31 env -- scripts/common.sh@336 -- # read -ra ver1 00:05:38.454 06:39:31 env -- scripts/common.sh@337 -- # IFS=.-: 00:05:38.454 06:39:31 env -- scripts/common.sh@337 -- # read -ra ver2 00:05:38.454 06:39:31 env -- scripts/common.sh@338 -- # local 'op=<' 00:05:38.454 06:39:31 env -- scripts/common.sh@340 -- # ver1_l=2 00:05:38.454 06:39:31 env -- scripts/common.sh@341 -- # ver2_l=1 00:05:38.454 06:39:31 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:38.454 06:39:31 env -- scripts/common.sh@344 -- # case "$op" in 00:05:38.454 06:39:31 env -- scripts/common.sh@345 -- # : 1 00:05:38.454 06:39:31 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:38.454 06:39:31 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:38.454 06:39:31 env -- scripts/common.sh@365 -- # decimal 1 00:05:38.454 06:39:31 env -- scripts/common.sh@353 -- # local d=1 00:05:38.454 06:39:31 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:38.454 06:39:31 env -- scripts/common.sh@355 -- # echo 1 00:05:38.455 06:39:31 env -- scripts/common.sh@365 -- # ver1[v]=1 00:05:38.455 06:39:31 env -- scripts/common.sh@366 -- # decimal 2 00:05:38.455 06:39:31 env -- scripts/common.sh@353 -- # local d=2 00:05:38.455 06:39:31 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:38.455 06:39:31 env -- scripts/common.sh@355 -- # echo 2 00:05:38.455 06:39:31 env -- scripts/common.sh@366 -- # ver2[v]=2 00:05:38.455 06:39:31 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:38.455 06:39:31 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:38.455 06:39:31 env -- scripts/common.sh@368 -- # return 0 00:05:38.455 06:39:31 env -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:38.455 06:39:31 env -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:38.455 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:38.455 --rc genhtml_branch_coverage=1 00:05:38.455 --rc genhtml_function_coverage=1 00:05:38.455 --rc genhtml_legend=1 00:05:38.455 --rc geninfo_all_blocks=1 00:05:38.455 --rc geninfo_unexecuted_blocks=1 00:05:38.455 00:05:38.455 ' 00:05:38.455 06:39:31 env -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:38.455 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:38.455 --rc genhtml_branch_coverage=1 00:05:38.455 --rc genhtml_function_coverage=1 00:05:38.455 --rc genhtml_legend=1 00:05:38.455 --rc geninfo_all_blocks=1 00:05:38.455 --rc geninfo_unexecuted_blocks=1 00:05:38.455 00:05:38.455 ' 00:05:38.455 06:39:31 env -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:38.455 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:38.455 --rc genhtml_branch_coverage=1 00:05:38.455 --rc genhtml_function_coverage=1 00:05:38.455 --rc genhtml_legend=1 00:05:38.455 --rc geninfo_all_blocks=1 00:05:38.455 --rc geninfo_unexecuted_blocks=1 00:05:38.455 00:05:38.455 ' 00:05:38.455 06:39:31 env -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:38.455 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:38.455 --rc genhtml_branch_coverage=1 00:05:38.455 --rc genhtml_function_coverage=1 00:05:38.455 --rc genhtml_legend=1 00:05:38.455 --rc geninfo_all_blocks=1 00:05:38.455 --rc geninfo_unexecuted_blocks=1 00:05:38.455 00:05:38.455 ' 00:05:38.455 06:39:31 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:05:38.455 06:39:31 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:38.455 06:39:31 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:38.455 06:39:31 env -- common/autotest_common.sh@10 -- # set +x 00:05:38.455 ************************************ 00:05:38.455 START TEST env_memory 00:05:38.455 ************************************ 00:05:38.455 06:39:31 env.env_memory -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:05:38.455 00:05:38.455 00:05:38.455 CUnit - A unit testing framework for C - Version 2.1-3 00:05:38.455 http://cunit.sourceforge.net/ 00:05:38.455 00:05:38.455 00:05:38.455 Suite: memory 00:05:38.455 Test: alloc and free memory map ...[2024-11-18 06:39:31.458140] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 
283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:05:38.455 passed 00:05:38.455 Test: mem map translation ...[2024-11-18 06:39:31.496939] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:05:38.455 [2024-11-18 06:39:31.497067] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:05:38.455 [2024-11-18 06:39:31.497182] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:05:38.455 [2024-11-18 06:39:31.497346] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:05:38.713 passed 00:05:38.713 Test: mem map registration ...[2024-11-18 06:39:31.568540] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:05:38.713 [2024-11-18 06:39:31.568646] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:05:38.713 passed 00:05:38.713 Test: mem map adjacent registrations ...passed 00:05:38.713 00:05:38.713 Run Summary: Type Total Ran Passed Failed Inactive 00:05:38.713 suites 1 1 n/a 0 0 00:05:38.713 tests 4 4 4 0 0 00:05:38.713 asserts 152 152 152 0 n/a 00:05:38.713 00:05:38.713 Elapsed time = 0.236 seconds 00:05:38.713 00:05:38.713 real 0m0.264s 00:05:38.713 user 0m0.239s 00:05:38.713 sys 0m0.015s 00:05:38.713 06:39:31 env.env_memory -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:38.713 06:39:31 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:05:38.713 ************************************ 00:05:38.713 END TEST env_memory 00:05:38.713 ************************************ 00:05:38.713 06:39:31 env -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:05:38.713 06:39:31 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:38.713 06:39:31 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:38.713 06:39:31 env -- common/autotest_common.sh@10 -- # set +x 00:05:38.713 ************************************ 00:05:38.713 START TEST env_vtophys 00:05:38.713 ************************************ 00:05:38.713 06:39:31 env.env_vtophys -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:05:38.713 EAL: lib.eal log level changed from notice to debug 00:05:38.713 EAL: Detected lcore 0 as core 0 on socket 0 00:05:38.714 EAL: Detected lcore 1 as core 0 on socket 0 00:05:38.714 EAL: Detected lcore 2 as core 0 on socket 0 00:05:38.714 EAL: Detected lcore 3 as core 0 on socket 0 00:05:38.714 EAL: Detected lcore 4 as core 0 on socket 0 00:05:38.714 EAL: Detected lcore 5 as core 0 on socket 0 00:05:38.714 EAL: Detected lcore 6 as core 0 on socket 0 00:05:38.714 EAL: Detected lcore 7 as core 0 on socket 0 00:05:38.714 EAL: Detected lcore 8 as core 0 on socket 0 00:05:38.714 EAL: Detected lcore 9 as core 0 on socket 0 00:05:38.714 EAL: Maximum logical cores by configuration: 128 00:05:38.714 EAL: Detected CPU lcores: 10 00:05:38.714 EAL: Detected NUMA nodes: 1 00:05:38.714 EAL: Checking presence of .so 'librte_eal.so.24.0' 00:05:38.714 EAL: Detected shared linkage of DPDK 00:05:38.714 EAL: 
open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so.24.0 00:05:38.714 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so.24.0 00:05:38.714 EAL: Registered [vdev] bus. 00:05:38.714 EAL: bus.vdev log level changed from disabled to notice 00:05:38.714 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so.24.0 00:05:38.714 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so.24.0 00:05:38.714 EAL: pmd.net.i40e.init log level changed from disabled to notice 00:05:38.714 EAL: pmd.net.i40e.driver log level changed from disabled to notice 00:05:38.714 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so 00:05:38.714 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so 00:05:38.714 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so 00:05:38.714 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so 00:05:38.714 EAL: No shared files mode enabled, IPC will be disabled 00:05:38.714 EAL: No shared files mode enabled, IPC is disabled 00:05:38.714 EAL: Selected IOVA mode 'PA' 00:05:38.714 EAL: Probing VFIO support... 00:05:38.714 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:05:38.714 EAL: VFIO modules not loaded, skipping VFIO support... 00:05:38.714 EAL: Ask a virtual area of 0x2e000 bytes 00:05:38.714 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:05:38.714 EAL: Setting up physically contiguous memory... 00:05:38.714 EAL: Setting maximum number of open files to 524288 00:05:38.714 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:05:38.714 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:05:38.714 EAL: Ask a virtual area of 0x61000 bytes 00:05:38.714 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:05:38.714 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:38.714 EAL: Ask a virtual area of 0x400000000 bytes 00:05:38.714 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:05:38.714 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:05:38.714 EAL: Ask a virtual area of 0x61000 bytes 00:05:38.714 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:05:38.714 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:38.714 EAL: Ask a virtual area of 0x400000000 bytes 00:05:38.714 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:05:38.714 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:05:38.714 EAL: Ask a virtual area of 0x61000 bytes 00:05:38.714 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:05:38.714 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:38.714 EAL: Ask a virtual area of 0x400000000 bytes 00:05:38.714 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:05:38.714 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:05:38.714 EAL: Ask a virtual area of 0x61000 bytes 00:05:38.714 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:05:38.714 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:38.714 EAL: Ask a virtual area of 0x400000000 bytes 00:05:38.714 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:05:38.714 EAL: VA reserved for memseg list at 0x200c00800000, size 
400000000 00:05:38.714 EAL: Hugepages will be freed exactly as allocated. 00:05:38.714 EAL: No shared files mode enabled, IPC is disabled 00:05:38.714 EAL: No shared files mode enabled, IPC is disabled 00:05:38.972 EAL: TSC frequency is ~2600000 KHz 00:05:38.972 EAL: Main lcore 0 is ready (tid=7f51fe815a40;cpuset=[0]) 00:05:38.972 EAL: Trying to obtain current memory policy. 00:05:38.972 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:38.972 EAL: Restoring previous memory policy: 0 00:05:38.972 EAL: request: mp_malloc_sync 00:05:38.972 EAL: No shared files mode enabled, IPC is disabled 00:05:38.972 EAL: Heap on socket 0 was expanded by 2MB 00:05:38.972 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:05:38.972 EAL: No shared files mode enabled, IPC is disabled 00:05:38.972 EAL: No PCI address specified using 'addr=' in: bus=pci 00:05:38.972 EAL: Mem event callback 'spdk:(nil)' registered 00:05:38.972 EAL: Module /sys/module/vfio_pci not found! error 2 (No such file or directory) 00:05:38.972 00:05:38.972 00:05:38.972 CUnit - A unit testing framework for C - Version 2.1-3 00:05:38.972 http://cunit.sourceforge.net/ 00:05:38.972 00:05:38.972 00:05:38.972 Suite: components_suite 00:05:39.231 Test: vtophys_malloc_test ...passed 00:05:39.231 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:05:39.231 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:39.231 EAL: Restoring previous memory policy: 4 00:05:39.231 EAL: Calling mem event callback 'spdk:(nil)' 00:05:39.231 EAL: request: mp_malloc_sync 00:05:39.231 EAL: No shared files mode enabled, IPC is disabled 00:05:39.231 EAL: Heap on socket 0 was expanded by 4MB 00:05:39.231 EAL: Calling mem event callback 'spdk:(nil)' 00:05:39.231 EAL: request: mp_malloc_sync 00:05:39.231 EAL: No shared files mode enabled, IPC is disabled 00:05:39.231 EAL: Heap on socket 0 was shrunk by 4MB 00:05:39.231 EAL: Trying to obtain current memory policy. 00:05:39.231 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:39.231 EAL: Restoring previous memory policy: 4 00:05:39.231 EAL: Calling mem event callback 'spdk:(nil)' 00:05:39.231 EAL: request: mp_malloc_sync 00:05:39.231 EAL: No shared files mode enabled, IPC is disabled 00:05:39.231 EAL: Heap on socket 0 was expanded by 6MB 00:05:39.231 EAL: Calling mem event callback 'spdk:(nil)' 00:05:39.231 EAL: request: mp_malloc_sync 00:05:39.231 EAL: No shared files mode enabled, IPC is disabled 00:05:39.231 EAL: Heap on socket 0 was shrunk by 6MB 00:05:39.231 EAL: Trying to obtain current memory policy. 00:05:39.231 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:39.231 EAL: Restoring previous memory policy: 4 00:05:39.231 EAL: Calling mem event callback 'spdk:(nil)' 00:05:39.231 EAL: request: mp_malloc_sync 00:05:39.231 EAL: No shared files mode enabled, IPC is disabled 00:05:39.231 EAL: Heap on socket 0 was expanded by 10MB 00:05:39.231 EAL: Calling mem event callback 'spdk:(nil)' 00:05:39.231 EAL: request: mp_malloc_sync 00:05:39.231 EAL: No shared files mode enabled, IPC is disabled 00:05:39.231 EAL: Heap on socket 0 was shrunk by 10MB 00:05:39.231 EAL: Trying to obtain current memory policy. 
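[Editor's note: the vtophys suite above drives DPDK heap growth through the 'spdk:' mem event callback at each allocation size. As a companion, here is a minimal C sketch, using only the public spdk/env.h API, of the allocate-then-translate pattern the test exercises; it assumes spdk_env_init() has already succeeded and is illustrative, not the test's own source.]

/* Illustrative sketch (assumption: SPDK env already initialized).
 * Allocate pinned, DMA-safe memory from the SPDK/DPDK heap and ask
 * for its physical address, as vtophys_malloc_test does per size. */
#include <inttypes.h>
#include <stdio.h>
#include "spdk/env.h"

static int check_vtophys(size_t size)
{
	/* 4 KiB-aligned buffer from hugepage-backed memory */
	void *buf = spdk_dma_malloc(size, 0x1000, NULL);
	if (buf == NULL) {
		return -1;
	}
	uint64_t paddr = spdk_vtophys(buf, NULL);
	if (paddr == SPDK_VTOPHYS_ERROR) {
		spdk_dma_free(buf);
		return -1;
	}
	printf("vaddr %p -> paddr 0x%" PRIx64 " (%zu bytes)\n", buf, paddr, size);
	spdk_dma_free(buf);
	return 0;
}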
00:05:39.231 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:39.231 EAL: Restoring previous memory policy: 4 00:05:39.231 EAL: Calling mem event callback 'spdk:(nil)' 00:05:39.231 EAL: request: mp_malloc_sync 00:05:39.231 EAL: No shared files mode enabled, IPC is disabled 00:05:39.231 EAL: Heap on socket 0 was expanded by 18MB 00:05:39.231 EAL: Calling mem event callback 'spdk:(nil)' 00:05:39.231 EAL: request: mp_malloc_sync 00:05:39.231 EAL: No shared files mode enabled, IPC is disabled 00:05:39.231 EAL: Heap on socket 0 was shrunk by 18MB 00:05:39.231 EAL: Trying to obtain current memory policy. 00:05:39.231 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:39.231 EAL: Restoring previous memory policy: 4 00:05:39.231 EAL: Calling mem event callback 'spdk:(nil)' 00:05:39.231 EAL: request: mp_malloc_sync 00:05:39.231 EAL: No shared files mode enabled, IPC is disabled 00:05:39.231 EAL: Heap on socket 0 was expanded by 34MB 00:05:39.231 EAL: Calling mem event callback 'spdk:(nil)' 00:05:39.231 EAL: request: mp_malloc_sync 00:05:39.231 EAL: No shared files mode enabled, IPC is disabled 00:05:39.231 EAL: Heap on socket 0 was shrunk by 34MB 00:05:39.231 EAL: Trying to obtain current memory policy. 00:05:39.231 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:39.231 EAL: Restoring previous memory policy: 4 00:05:39.231 EAL: Calling mem event callback 'spdk:(nil)' 00:05:39.231 EAL: request: mp_malloc_sync 00:05:39.231 EAL: No shared files mode enabled, IPC is disabled 00:05:39.231 EAL: Heap on socket 0 was expanded by 66MB 00:05:39.231 EAL: Calling mem event callback 'spdk:(nil)' 00:05:39.231 EAL: request: mp_malloc_sync 00:05:39.231 EAL: No shared files mode enabled, IPC is disabled 00:05:39.231 EAL: Heap on socket 0 was shrunk by 66MB 00:05:39.231 EAL: Trying to obtain current memory policy. 00:05:39.231 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:39.231 EAL: Restoring previous memory policy: 4 00:05:39.231 EAL: Calling mem event callback 'spdk:(nil)' 00:05:39.231 EAL: request: mp_malloc_sync 00:05:39.231 EAL: No shared files mode enabled, IPC is disabled 00:05:39.231 EAL: Heap on socket 0 was expanded by 130MB 00:05:39.231 EAL: Calling mem event callback 'spdk:(nil)' 00:05:39.231 EAL: request: mp_malloc_sync 00:05:39.231 EAL: No shared files mode enabled, IPC is disabled 00:05:39.231 EAL: Heap on socket 0 was shrunk by 130MB 00:05:39.231 EAL: Trying to obtain current memory policy. 00:05:39.231 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:39.231 EAL: Restoring previous memory policy: 4 00:05:39.231 EAL: Calling mem event callback 'spdk:(nil)' 00:05:39.231 EAL: request: mp_malloc_sync 00:05:39.231 EAL: No shared files mode enabled, IPC is disabled 00:05:39.231 EAL: Heap on socket 0 was expanded by 258MB 00:05:39.231 EAL: Calling mem event callback 'spdk:(nil)' 00:05:39.489 EAL: request: mp_malloc_sync 00:05:39.489 EAL: No shared files mode enabled, IPC is disabled 00:05:39.489 EAL: Heap on socket 0 was shrunk by 258MB 00:05:39.489 EAL: Trying to obtain current memory policy. 
00:05:39.489 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:39.489 EAL: Restoring previous memory policy: 4 00:05:39.489 EAL: Calling mem event callback 'spdk:(nil)' 00:05:39.489 EAL: request: mp_malloc_sync 00:05:39.489 EAL: No shared files mode enabled, IPC is disabled 00:05:39.489 EAL: Heap on socket 0 was expanded by 514MB 00:05:39.489 EAL: Calling mem event callback 'spdk:(nil)' 00:05:39.489 EAL: request: mp_malloc_sync 00:05:39.489 EAL: No shared files mode enabled, IPC is disabled 00:05:39.489 EAL: Heap on socket 0 was shrunk by 514MB 00:05:39.489 EAL: Trying to obtain current memory policy. 00:05:39.489 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:39.747 EAL: Restoring previous memory policy: 4 00:05:39.747 EAL: Calling mem event callback 'spdk:(nil)' 00:05:39.747 EAL: request: mp_malloc_sync 00:05:39.747 EAL: No shared files mode enabled, IPC is disabled 00:05:39.747 EAL: Heap on socket 0 was expanded by 1026MB 00:05:39.747 EAL: Calling mem event callback 'spdk:(nil)' 00:05:40.006 passed 00:05:40.006 00:05:40.006 Run Summary: Type Total Ran Passed Failed Inactive 00:05:40.006 suites 1 1 n/a 0 0 00:05:40.006 tests 2 2 2 0 0 00:05:40.006 asserts 5337 5337 5337 0 n/a 00:05:40.006 00:05:40.006 Elapsed time = 0.927 seconds 00:05:40.006 EAL: request: mp_malloc_sync 00:05:40.006 EAL: No shared files mode enabled, IPC is disabled 00:05:40.006 EAL: Heap on socket 0 was shrunk by 1026MB 00:05:40.006 EAL: Calling mem event callback 'spdk:(nil)' 00:05:40.006 EAL: request: mp_malloc_sync 00:05:40.006 EAL: No shared files mode enabled, IPC is disabled 00:05:40.006 EAL: Heap on socket 0 was shrunk by 2MB 00:05:40.006 EAL: No shared files mode enabled, IPC is disabled 00:05:40.006 EAL: No shared files mode enabled, IPC is disabled 00:05:40.006 EAL: No shared files mode enabled, IPC is disabled 00:05:40.006 00:05:40.006 real 0m1.153s 00:05:40.006 user 0m0.472s 00:05:40.006 sys 0m0.552s 00:05:40.006 ************************************ 00:05:40.006 END TEST env_vtophys 00:05:40.006 ************************************ 00:05:40.006 06:39:32 env.env_vtophys -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:40.006 06:39:32 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:05:40.006 06:39:32 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:05:40.006 06:39:32 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:40.006 06:39:32 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:40.006 06:39:32 env -- common/autotest_common.sh@10 -- # set +x 00:05:40.006 ************************************ 00:05:40.006 START TEST env_pci 00:05:40.006 ************************************ 00:05:40.006 06:39:32 env.env_pci -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:05:40.006 00:05:40.006 00:05:40.006 CUnit - A unit testing framework for C - Version 2.1-3 00:05:40.006 http://cunit.sourceforge.net/ 00:05:40.006 00:05:40.006 00:05:40.006 Suite: pci 00:05:40.007 Test: pci_hook ...[2024-11-18 06:39:32.924512] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 69256 has claimed it 00:05:40.007 passed 00:05:40.007 00:05:40.007 Run Summary: Type Total Ran Passed Failed Inactive 00:05:40.007 suites 1 1 n/a 0 0 00:05:40.007 tests 1 1 1 0 0 00:05:40.007 asserts 25 25 25 0 n/a 00:05:40.007 00:05:40.007 Elapsed time = 0.005 seconds 00:05:40.007 EAL: Cannot find 
device (10000:00:01.0) 00:05:40.007 EAL: Failed to attach device on primary process 00:05:40.007 00:05:40.007 real 0m0.059s 00:05:40.007 user 0m0.022s 00:05:40.007 sys 0m0.037s 00:05:40.007 06:39:32 env.env_pci -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:40.007 ************************************ 00:05:40.007 END TEST env_pci 00:05:40.007 ************************************ 00:05:40.007 06:39:32 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:05:40.007 06:39:32 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:05:40.007 06:39:32 env -- env/env.sh@15 -- # uname 00:05:40.007 06:39:32 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:05:40.007 06:39:32 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:05:40.007 06:39:32 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:40.007 06:39:32 env -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:05:40.007 06:39:32 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:40.007 06:39:32 env -- common/autotest_common.sh@10 -- # set +x 00:05:40.007 ************************************ 00:05:40.007 START TEST env_dpdk_post_init 00:05:40.007 ************************************ 00:05:40.007 06:39:33 env.env_dpdk_post_init -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:40.007 EAL: Detected CPU lcores: 10 00:05:40.007 EAL: Detected NUMA nodes: 1 00:05:40.007 EAL: Detected shared linkage of DPDK 00:05:40.007 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:40.007 EAL: Selected IOVA mode 'PA' 00:05:40.266 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:40.266 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:05:40.266 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1) 00:05:40.266 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:12.0 (socket -1) 00:05:40.266 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:13.0 (socket -1) 00:05:40.266 Starting DPDK initialization... 00:05:40.266 Starting SPDK post initialization... 00:05:40.266 SPDK NVMe probe 00:05:40.266 Attaching to 0000:00:10.0 00:05:40.266 Attaching to 0000:00:11.0 00:05:40.266 Attaching to 0000:00:12.0 00:05:40.266 Attaching to 0000:00:13.0 00:05:40.266 Attached to 0000:00:10.0 00:05:40.266 Attached to 0000:00:11.0 00:05:40.266 Attached to 0000:00:13.0 00:05:40.266 Attached to 0000:00:12.0 00:05:40.266 Cleaning up... 
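[Editor's note: env_dpdk_post_init, whose output ends here, was invoked with '-c 0x1 --base-virtaddr=0x200000000000' before probing the four emulated NVMe controllers. A hedged C sketch of the equivalent initialization through spdk_env_opts follows; the field names come from the public spdk/env.h, and error handling is trimmed for brevity.]

/* Hedged sketch of the env setup behind the run above; the values
 * mirror the command line (-c 0x1, --base-virtaddr=0x200000000000). */
#include "spdk/env.h"

int init_env(void)
{
	struct spdk_env_opts opts;

	opts.opts_size = sizeof(opts);          /* newer SPDK releases expect this */
	spdk_env_opts_init(&opts);
	opts.name = "env_dpdk_post_init";
	opts.core_mask = "0x1";                 /* -c 0x1 */
	opts.base_virtaddr = 0x200000000000ULL; /* --base-virtaddr */
	return spdk_env_init(&opts) < 0 ? -1 : 0;
}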
00:05:40.266 ************************************ 00:05:40.266 END TEST env_dpdk_post_init 00:05:40.266 ************************************ 00:05:40.266 00:05:40.266 real 0m0.225s 00:05:40.266 user 0m0.068s 00:05:40.266 sys 0m0.056s 00:05:40.266 06:39:33 env.env_dpdk_post_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:40.266 06:39:33 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:05:40.266 06:39:33 env -- env/env.sh@26 -- # uname 00:05:40.266 06:39:33 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:05:40.266 06:39:33 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:05:40.266 06:39:33 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:40.266 06:39:33 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:40.266 06:39:33 env -- common/autotest_common.sh@10 -- # set +x 00:05:40.266 ************************************ 00:05:40.266 START TEST env_mem_callbacks 00:05:40.266 ************************************ 00:05:40.266 06:39:33 env.env_mem_callbacks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:05:40.266 EAL: Detected CPU lcores: 10 00:05:40.266 EAL: Detected NUMA nodes: 1 00:05:40.266 EAL: Detected shared linkage of DPDK 00:05:40.266 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:40.266 EAL: Selected IOVA mode 'PA' 00:05:40.524 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:40.524 00:05:40.524 00:05:40.524 CUnit - A unit testing framework for C - Version 2.1-3 00:05:40.524 http://cunit.sourceforge.net/ 00:05:40.524 00:05:40.524 00:05:40.524 Suite: memory 00:05:40.524 Test: test ... 00:05:40.524 register 0x200000200000 2097152 00:05:40.524 malloc 3145728 00:05:40.524 register 0x200000400000 4194304 00:05:40.524 buf 0x200000500000 len 3145728 PASSED 00:05:40.524 malloc 64 00:05:40.524 buf 0x2000004fff40 len 64 PASSED 00:05:40.524 malloc 4194304 00:05:40.524 register 0x200000800000 6291456 00:05:40.524 buf 0x200000a00000 len 4194304 PASSED 00:05:40.524 free 0x200000500000 3145728 00:05:40.524 free 0x2000004fff40 64 00:05:40.524 unregister 0x200000400000 4194304 PASSED 00:05:40.524 free 0x200000a00000 4194304 00:05:40.524 unregister 0x200000800000 6291456 PASSED 00:05:40.524 malloc 8388608 00:05:40.524 register 0x200000400000 10485760 00:05:40.524 buf 0x200000600000 len 8388608 PASSED 00:05:40.524 free 0x200000600000 8388608 00:05:40.524 unregister 0x200000400000 10485760 PASSED 00:05:40.524 passed 00:05:40.524 00:05:40.524 Run Summary: Type Total Ran Passed Failed Inactive 00:05:40.524 suites 1 1 n/a 0 0 00:05:40.524 tests 1 1 1 0 0 00:05:40.524 asserts 15 15 15 0 n/a 00:05:40.524 00:05:40.524 Elapsed time = 0.009 seconds 00:05:40.524 00:05:40.524 real 0m0.179s 00:05:40.524 user 0m0.023s 00:05:40.524 sys 0m0.051s 00:05:40.524 06:39:33 env.env_mem_callbacks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:40.524 06:39:33 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:05:40.524 ************************************ 00:05:40.524 END TEST env_mem_callbacks 00:05:40.524 ************************************ 00:05:40.524 00:05:40.524 real 0m2.220s 00:05:40.524 user 0m0.978s 00:05:40.524 sys 0m0.898s 00:05:40.524 06:39:33 env -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:40.524 06:39:33 env -- common/autotest_common.sh@10 -- # set +x 00:05:40.524 ************************************ 00:05:40.524 END TEST env 00:05:40.524 
************************************ 00:05:40.524 06:39:33 -- spdk/autotest.sh@156 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:05:40.524 06:39:33 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:40.524 06:39:33 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:40.524 06:39:33 -- common/autotest_common.sh@10 -- # set +x 00:05:40.524 ************************************ 00:05:40.524 START TEST rpc 00:05:40.524 ************************************ 00:05:40.524 06:39:33 rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:05:40.524 * Looking for test storage... 00:05:40.524 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:05:40.524 06:39:33 rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:40.524 06:39:33 rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:05:40.524 06:39:33 rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:40.783 06:39:33 rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:40.783 06:39:33 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:40.783 06:39:33 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:40.783 06:39:33 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:40.783 06:39:33 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:40.783 06:39:33 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:40.783 06:39:33 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:40.783 06:39:33 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:40.783 06:39:33 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:40.783 06:39:33 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:05:40.783 06:39:33 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:40.783 06:39:33 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:40.783 06:39:33 rpc -- scripts/common.sh@344 -- # case "$op" in 00:05:40.783 06:39:33 rpc -- scripts/common.sh@345 -- # : 1 00:05:40.783 06:39:33 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:40.783 06:39:33 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:40.783 06:39:33 rpc -- scripts/common.sh@365 -- # decimal 1 00:05:40.783 06:39:33 rpc -- scripts/common.sh@353 -- # local d=1 00:05:40.783 06:39:33 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:40.783 06:39:33 rpc -- scripts/common.sh@355 -- # echo 1 00:05:40.783 06:39:33 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:40.783 06:39:33 rpc -- scripts/common.sh@366 -- # decimal 2 00:05:40.783 06:39:33 rpc -- scripts/common.sh@353 -- # local d=2 00:05:40.783 06:39:33 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:40.783 06:39:33 rpc -- scripts/common.sh@355 -- # echo 2 00:05:40.783 06:39:33 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:40.783 06:39:33 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:40.783 06:39:33 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:40.783 06:39:33 rpc -- scripts/common.sh@368 -- # return 0 00:05:40.783 06:39:33 rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:40.783 06:39:33 rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:40.783 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:40.783 --rc genhtml_branch_coverage=1 00:05:40.783 --rc genhtml_function_coverage=1 00:05:40.783 --rc genhtml_legend=1 00:05:40.783 --rc geninfo_all_blocks=1 00:05:40.783 --rc geninfo_unexecuted_blocks=1 00:05:40.783 00:05:40.783 ' 00:05:40.783 06:39:33 rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:40.783 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:40.783 --rc genhtml_branch_coverage=1 00:05:40.783 --rc genhtml_function_coverage=1 00:05:40.783 --rc genhtml_legend=1 00:05:40.783 --rc geninfo_all_blocks=1 00:05:40.783 --rc geninfo_unexecuted_blocks=1 00:05:40.783 00:05:40.783 ' 00:05:40.783 06:39:33 rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:40.783 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:40.783 --rc genhtml_branch_coverage=1 00:05:40.783 --rc genhtml_function_coverage=1 00:05:40.783 --rc genhtml_legend=1 00:05:40.783 --rc geninfo_all_blocks=1 00:05:40.783 --rc geninfo_unexecuted_blocks=1 00:05:40.783 00:05:40.783 ' 00:05:40.783 06:39:33 rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:40.783 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:40.783 --rc genhtml_branch_coverage=1 00:05:40.783 --rc genhtml_function_coverage=1 00:05:40.783 --rc genhtml_legend=1 00:05:40.783 --rc geninfo_all_blocks=1 00:05:40.783 --rc geninfo_unexecuted_blocks=1 00:05:40.783 00:05:40.783 ' 00:05:40.783 06:39:33 rpc -- rpc/rpc.sh@65 -- # spdk_pid=69377 00:05:40.783 06:39:33 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:05:40.783 06:39:33 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:40.783 06:39:33 rpc -- rpc/rpc.sh@67 -- # waitforlisten 69377 00:05:40.783 06:39:33 rpc -- common/autotest_common.sh@835 -- # '[' -z 69377 ']' 00:05:40.783 06:39:33 rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:40.783 06:39:33 rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:40.783 06:39:33 rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:40.783 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
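[Editor's note: rpc.sh has just launched spdk_tgt, which listens on /var/tmp/spdk.sock; every rpc_cmd call below is a JSON-RPC 2.0 request over that socket, normally issued via scripts/rpc.py. The self-contained C sketch below shows the same exchange with nothing but POSIX sockets; 'rpc_get_methods' is a method the target exposes, and the one-request/one-read framing is a simplifying assumption.]

/* Hedged sketch: talk to the spdk_tgt started above over its UNIX
 * domain socket using plain POSIX calls and a raw JSON-RPC 2.0 body.
 * rpc_cmd in the log does the same thing through scripts/rpc.py. */
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/socket.h>
#include <sys/un.h>

int main(void)
{
	int fd = socket(AF_UNIX, SOCK_STREAM, 0);
	struct sockaddr_un addr = { .sun_family = AF_UNIX };
	const char *req =
	    "{\"jsonrpc\":\"2.0\",\"id\":1,\"method\":\"rpc_get_methods\"}";
	char resp[4096];
	ssize_t n;

	strncpy(addr.sun_path, "/var/tmp/spdk.sock", sizeof(addr.sun_path) - 1);
	if (fd < 0 || connect(fd, (struct sockaddr *)&addr, sizeof(addr)) != 0) {
		return 1;
	}
	if (write(fd, req, strlen(req)) < 0) {
		close(fd);
		return 1;
	}
	n = read(fd, resp, sizeof(resp) - 1);
	if (n > 0) {
		resp[n] = '\0';
		printf("%s\n", resp); /* JSON array of supported RPC methods */
	}
	close(fd);
	return 0;
}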
00:05:40.783 06:39:33 rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:40.783 06:39:33 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:40.783 [2024-11-18 06:39:33.717330] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 00:05:40.783 [2024-11-18 06:39:33.717816] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69377 ] 00:05:41.041 [2024-11-18 06:39:33.870080] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:41.041 [2024-11-18 06:39:33.888520] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:05:41.041 [2024-11-18 06:39:33.888558] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 69377' to capture a snapshot of events at runtime. 00:05:41.041 [2024-11-18 06:39:33.888570] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:05:41.041 [2024-11-18 06:39:33.888578] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:05:41.041 [2024-11-18 06:39:33.888586] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid69377 for offline analysis/debug. 00:05:41.041 [2024-11-18 06:39:33.888881] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:41.607 06:39:34 rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:41.607 06:39:34 rpc -- common/autotest_common.sh@868 -- # return 0 00:05:41.607 06:39:34 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:05:41.607 06:39:34 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:05:41.607 06:39:34 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:05:41.607 06:39:34 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:05:41.607 06:39:34 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:41.607 06:39:34 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:41.607 06:39:34 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:41.607 ************************************ 00:05:41.607 START TEST rpc_integrity 00:05:41.607 ************************************ 00:05:41.607 06:39:34 rpc.rpc_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:05:41.607 06:39:34 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:41.607 06:39:34 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:41.607 06:39:34 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:41.607 06:39:34 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:41.607 06:39:34 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:41.607 06:39:34 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:41.607 06:39:34 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:41.607 06:39:34 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:41.607 06:39:34 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:41.607 06:39:34 
rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:41.607 06:39:34 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:41.607 06:39:34 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:05:41.607 06:39:34 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:41.607 06:39:34 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:41.607 06:39:34 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:41.607 06:39:34 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:41.607 06:39:34 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:41.607 { 00:05:41.607 "name": "Malloc0", 00:05:41.607 "aliases": [ 00:05:41.607 "cdb3faea-5d4f-4896-9c7e-baf777390214" 00:05:41.607 ], 00:05:41.607 "product_name": "Malloc disk", 00:05:41.607 "block_size": 512, 00:05:41.607 "num_blocks": 16384, 00:05:41.607 "uuid": "cdb3faea-5d4f-4896-9c7e-baf777390214", 00:05:41.607 "assigned_rate_limits": { 00:05:41.607 "rw_ios_per_sec": 0, 00:05:41.607 "rw_mbytes_per_sec": 0, 00:05:41.607 "r_mbytes_per_sec": 0, 00:05:41.608 "w_mbytes_per_sec": 0 00:05:41.608 }, 00:05:41.608 "claimed": false, 00:05:41.608 "zoned": false, 00:05:41.608 "supported_io_types": { 00:05:41.608 "read": true, 00:05:41.608 "write": true, 00:05:41.608 "unmap": true, 00:05:41.608 "flush": true, 00:05:41.608 "reset": true, 00:05:41.608 "nvme_admin": false, 00:05:41.608 "nvme_io": false, 00:05:41.608 "nvme_io_md": false, 00:05:41.608 "write_zeroes": true, 00:05:41.608 "zcopy": true, 00:05:41.608 "get_zone_info": false, 00:05:41.608 "zone_management": false, 00:05:41.608 "zone_append": false, 00:05:41.608 "compare": false, 00:05:41.608 "compare_and_write": false, 00:05:41.608 "abort": true, 00:05:41.608 "seek_hole": false, 00:05:41.608 "seek_data": false, 00:05:41.608 "copy": true, 00:05:41.608 "nvme_iov_md": false 00:05:41.608 }, 00:05:41.608 "memory_domains": [ 00:05:41.608 { 00:05:41.608 "dma_device_id": "system", 00:05:41.608 "dma_device_type": 1 00:05:41.608 }, 00:05:41.608 { 00:05:41.608 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:41.608 "dma_device_type": 2 00:05:41.608 } 00:05:41.608 ], 00:05:41.608 "driver_specific": {} 00:05:41.608 } 00:05:41.608 ]' 00:05:41.608 06:39:34 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:41.608 06:39:34 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:41.608 06:39:34 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:05:41.608 06:39:34 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:41.608 06:39:34 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:41.608 [2024-11-18 06:39:34.678731] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:05:41.608 [2024-11-18 06:39:34.678784] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:41.608 [2024-11-18 06:39:34.678810] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:05:41.608 [2024-11-18 06:39:34.678820] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:41.608 [2024-11-18 06:39:34.681138] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:41.608 [2024-11-18 06:39:34.681175] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:41.608 Passthru0 00:05:41.608 06:39:34 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:41.608 
06:39:34 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:41.608 06:39:34 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:41.608 06:39:34 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:41.866 06:39:34 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:41.866 06:39:34 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:41.866 { 00:05:41.866 "name": "Malloc0", 00:05:41.866 "aliases": [ 00:05:41.866 "cdb3faea-5d4f-4896-9c7e-baf777390214" 00:05:41.866 ], 00:05:41.866 "product_name": "Malloc disk", 00:05:41.866 "block_size": 512, 00:05:41.866 "num_blocks": 16384, 00:05:41.866 "uuid": "cdb3faea-5d4f-4896-9c7e-baf777390214", 00:05:41.866 "assigned_rate_limits": { 00:05:41.866 "rw_ios_per_sec": 0, 00:05:41.867 "rw_mbytes_per_sec": 0, 00:05:41.867 "r_mbytes_per_sec": 0, 00:05:41.867 "w_mbytes_per_sec": 0 00:05:41.867 }, 00:05:41.867 "claimed": true, 00:05:41.867 "claim_type": "exclusive_write", 00:05:41.867 "zoned": false, 00:05:41.867 "supported_io_types": { 00:05:41.867 "read": true, 00:05:41.867 "write": true, 00:05:41.867 "unmap": true, 00:05:41.867 "flush": true, 00:05:41.867 "reset": true, 00:05:41.867 "nvme_admin": false, 00:05:41.867 "nvme_io": false, 00:05:41.867 "nvme_io_md": false, 00:05:41.867 "write_zeroes": true, 00:05:41.867 "zcopy": true, 00:05:41.867 "get_zone_info": false, 00:05:41.867 "zone_management": false, 00:05:41.867 "zone_append": false, 00:05:41.867 "compare": false, 00:05:41.867 "compare_and_write": false, 00:05:41.867 "abort": true, 00:05:41.867 "seek_hole": false, 00:05:41.867 "seek_data": false, 00:05:41.867 "copy": true, 00:05:41.867 "nvme_iov_md": false 00:05:41.867 }, 00:05:41.867 "memory_domains": [ 00:05:41.867 { 00:05:41.867 "dma_device_id": "system", 00:05:41.867 "dma_device_type": 1 00:05:41.867 }, 00:05:41.867 { 00:05:41.867 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:41.867 "dma_device_type": 2 00:05:41.867 } 00:05:41.867 ], 00:05:41.867 "driver_specific": {} 00:05:41.867 }, 00:05:41.867 { 00:05:41.867 "name": "Passthru0", 00:05:41.867 "aliases": [ 00:05:41.867 "4548a1ee-f00d-5c80-ac8c-945a1fe4169c" 00:05:41.867 ], 00:05:41.867 "product_name": "passthru", 00:05:41.867 "block_size": 512, 00:05:41.867 "num_blocks": 16384, 00:05:41.867 "uuid": "4548a1ee-f00d-5c80-ac8c-945a1fe4169c", 00:05:41.867 "assigned_rate_limits": { 00:05:41.867 "rw_ios_per_sec": 0, 00:05:41.867 "rw_mbytes_per_sec": 0, 00:05:41.867 "r_mbytes_per_sec": 0, 00:05:41.867 "w_mbytes_per_sec": 0 00:05:41.867 }, 00:05:41.867 "claimed": false, 00:05:41.867 "zoned": false, 00:05:41.867 "supported_io_types": { 00:05:41.867 "read": true, 00:05:41.867 "write": true, 00:05:41.867 "unmap": true, 00:05:41.867 "flush": true, 00:05:41.867 "reset": true, 00:05:41.867 "nvme_admin": false, 00:05:41.867 "nvme_io": false, 00:05:41.867 "nvme_io_md": false, 00:05:41.867 "write_zeroes": true, 00:05:41.867 "zcopy": true, 00:05:41.867 "get_zone_info": false, 00:05:41.867 "zone_management": false, 00:05:41.867 "zone_append": false, 00:05:41.867 "compare": false, 00:05:41.867 "compare_and_write": false, 00:05:41.867 "abort": true, 00:05:41.867 "seek_hole": false, 00:05:41.867 "seek_data": false, 00:05:41.867 "copy": true, 00:05:41.867 "nvme_iov_md": false 00:05:41.867 }, 00:05:41.867 "memory_domains": [ 00:05:41.867 { 00:05:41.867 "dma_device_id": "system", 00:05:41.867 "dma_device_type": 1 00:05:41.867 }, 00:05:41.867 { 00:05:41.867 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:41.867 "dma_device_type": 2 
00:05:41.867 } 00:05:41.867 ], 00:05:41.867 "driver_specific": { 00:05:41.867 "passthru": { 00:05:41.867 "name": "Passthru0", 00:05:41.867 "base_bdev_name": "Malloc0" 00:05:41.867 } 00:05:41.867 } 00:05:41.867 } 00:05:41.867 ]' 00:05:41.867 06:39:34 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:41.867 06:39:34 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:41.867 06:39:34 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:41.867 06:39:34 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:41.867 06:39:34 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:41.867 06:39:34 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:41.867 06:39:34 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:05:41.867 06:39:34 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:41.867 06:39:34 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:41.867 06:39:34 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:41.867 06:39:34 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:41.867 06:39:34 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:41.867 06:39:34 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:41.867 06:39:34 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:41.867 06:39:34 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:41.867 06:39:34 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:41.867 ************************************ 00:05:41.867 END TEST rpc_integrity 00:05:41.867 ************************************ 00:05:41.867 06:39:34 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:41.867 00:05:41.867 real 0m0.233s 00:05:41.867 user 0m0.128s 00:05:41.867 sys 0m0.042s 00:05:41.867 06:39:34 rpc.rpc_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:41.867 06:39:34 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:41.867 06:39:34 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:05:41.867 06:39:34 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:41.867 06:39:34 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:41.867 06:39:34 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:41.867 ************************************ 00:05:41.867 START TEST rpc_plugins 00:05:41.867 ************************************ 00:05:41.867 06:39:34 rpc.rpc_plugins -- common/autotest_common.sh@1129 -- # rpc_plugins 00:05:41.867 06:39:34 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:05:41.867 06:39:34 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:41.867 06:39:34 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:41.867 06:39:34 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:41.867 06:39:34 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:05:41.867 06:39:34 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:05:41.867 06:39:34 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:41.867 06:39:34 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:41.867 06:39:34 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:41.867 06:39:34 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:05:41.867 { 00:05:41.867 "name": "Malloc1", 00:05:41.867 "aliases": 
[ 00:05:41.867 "42fc9dca-ab9f-4cf8-8447-6349e76a575f" 00:05:41.867 ], 00:05:41.867 "product_name": "Malloc disk", 00:05:41.867 "block_size": 4096, 00:05:41.867 "num_blocks": 256, 00:05:41.867 "uuid": "42fc9dca-ab9f-4cf8-8447-6349e76a575f", 00:05:41.867 "assigned_rate_limits": { 00:05:41.867 "rw_ios_per_sec": 0, 00:05:41.867 "rw_mbytes_per_sec": 0, 00:05:41.867 "r_mbytes_per_sec": 0, 00:05:41.867 "w_mbytes_per_sec": 0 00:05:41.867 }, 00:05:41.867 "claimed": false, 00:05:41.867 "zoned": false, 00:05:41.867 "supported_io_types": { 00:05:41.867 "read": true, 00:05:41.867 "write": true, 00:05:41.867 "unmap": true, 00:05:41.867 "flush": true, 00:05:41.867 "reset": true, 00:05:41.867 "nvme_admin": false, 00:05:41.867 "nvme_io": false, 00:05:41.867 "nvme_io_md": false, 00:05:41.867 "write_zeroes": true, 00:05:41.867 "zcopy": true, 00:05:41.867 "get_zone_info": false, 00:05:41.867 "zone_management": false, 00:05:41.867 "zone_append": false, 00:05:41.867 "compare": false, 00:05:41.867 "compare_and_write": false, 00:05:41.867 "abort": true, 00:05:41.867 "seek_hole": false, 00:05:41.867 "seek_data": false, 00:05:41.867 "copy": true, 00:05:41.867 "nvme_iov_md": false 00:05:41.867 }, 00:05:41.867 "memory_domains": [ 00:05:41.867 { 00:05:41.867 "dma_device_id": "system", 00:05:41.867 "dma_device_type": 1 00:05:41.867 }, 00:05:41.867 { 00:05:41.867 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:41.867 "dma_device_type": 2 00:05:41.867 } 00:05:41.867 ], 00:05:41.867 "driver_specific": {} 00:05:41.867 } 00:05:41.867 ]' 00:05:41.867 06:39:34 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:05:41.867 06:39:34 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:05:41.867 06:39:34 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:05:41.867 06:39:34 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:41.867 06:39:34 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:41.867 06:39:34 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:41.867 06:39:34 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:05:41.867 06:39:34 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:41.867 06:39:34 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:41.867 06:39:34 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:41.867 06:39:34 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:05:41.867 06:39:34 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:05:42.126 ************************************ 00:05:42.126 END TEST rpc_plugins 00:05:42.126 ************************************ 00:05:42.126 06:39:34 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:05:42.126 00:05:42.126 real 0m0.111s 00:05:42.126 user 0m0.065s 00:05:42.126 sys 0m0.013s 00:05:42.126 06:39:34 rpc.rpc_plugins -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:42.126 06:39:34 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:42.126 06:39:35 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:05:42.126 06:39:35 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:42.126 06:39:35 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:42.126 06:39:35 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:42.126 ************************************ 00:05:42.126 START TEST rpc_trace_cmd_test 00:05:42.126 ************************************ 00:05:42.126 06:39:35 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1129 
-- # rpc_trace_cmd_test 00:05:42.126 06:39:35 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:05:42.126 06:39:35 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:05:42.126 06:39:35 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:42.126 06:39:35 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:42.126 06:39:35 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:42.126 06:39:35 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:05:42.126 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid69377", 00:05:42.126 "tpoint_group_mask": "0x8", 00:05:42.126 "iscsi_conn": { 00:05:42.126 "mask": "0x2", 00:05:42.126 "tpoint_mask": "0x0" 00:05:42.126 }, 00:05:42.126 "scsi": { 00:05:42.126 "mask": "0x4", 00:05:42.126 "tpoint_mask": "0x0" 00:05:42.126 }, 00:05:42.126 "bdev": { 00:05:42.126 "mask": "0x8", 00:05:42.126 "tpoint_mask": "0xffffffffffffffff" 00:05:42.126 }, 00:05:42.126 "nvmf_rdma": { 00:05:42.126 "mask": "0x10", 00:05:42.126 "tpoint_mask": "0x0" 00:05:42.126 }, 00:05:42.126 "nvmf_tcp": { 00:05:42.126 "mask": "0x20", 00:05:42.126 "tpoint_mask": "0x0" 00:05:42.126 }, 00:05:42.126 "ftl": { 00:05:42.126 "mask": "0x40", 00:05:42.126 "tpoint_mask": "0x0" 00:05:42.126 }, 00:05:42.126 "blobfs": { 00:05:42.126 "mask": "0x80", 00:05:42.126 "tpoint_mask": "0x0" 00:05:42.126 }, 00:05:42.126 "dsa": { 00:05:42.126 "mask": "0x200", 00:05:42.126 "tpoint_mask": "0x0" 00:05:42.126 }, 00:05:42.126 "thread": { 00:05:42.126 "mask": "0x400", 00:05:42.126 "tpoint_mask": "0x0" 00:05:42.126 }, 00:05:42.126 "nvme_pcie": { 00:05:42.126 "mask": "0x800", 00:05:42.126 "tpoint_mask": "0x0" 00:05:42.126 }, 00:05:42.126 "iaa": { 00:05:42.126 "mask": "0x1000", 00:05:42.126 "tpoint_mask": "0x0" 00:05:42.126 }, 00:05:42.126 "nvme_tcp": { 00:05:42.126 "mask": "0x2000", 00:05:42.126 "tpoint_mask": "0x0" 00:05:42.126 }, 00:05:42.126 "bdev_nvme": { 00:05:42.126 "mask": "0x4000", 00:05:42.126 "tpoint_mask": "0x0" 00:05:42.126 }, 00:05:42.126 "sock": { 00:05:42.126 "mask": "0x8000", 00:05:42.126 "tpoint_mask": "0x0" 00:05:42.126 }, 00:05:42.126 "blob": { 00:05:42.126 "mask": "0x10000", 00:05:42.126 "tpoint_mask": "0x0" 00:05:42.126 }, 00:05:42.126 "bdev_raid": { 00:05:42.126 "mask": "0x20000", 00:05:42.126 "tpoint_mask": "0x0" 00:05:42.126 }, 00:05:42.126 "scheduler": { 00:05:42.126 "mask": "0x40000", 00:05:42.126 "tpoint_mask": "0x0" 00:05:42.126 } 00:05:42.126 }' 00:05:42.126 06:39:35 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:05:42.126 06:39:35 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:05:42.127 06:39:35 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:05:42.127 06:39:35 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:05:42.127 06:39:35 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:05:42.127 06:39:35 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:05:42.127 06:39:35 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:05:42.127 06:39:35 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:05:42.127 06:39:35 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:05:42.127 ************************************ 00:05:42.127 END TEST rpc_trace_cmd_test 00:05:42.127 ************************************ 00:05:42.127 06:39:35 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:05:42.127 00:05:42.127 real 0m0.173s 
00:05:42.127 user 0m0.141s 00:05:42.127 sys 0m0.021s 00:05:42.127 06:39:35 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:42.127 06:39:35 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:42.453 06:39:35 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:05:42.454 06:39:35 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:05:42.454 06:39:35 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:05:42.454 06:39:35 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:42.454 06:39:35 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:42.454 06:39:35 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:42.454 ************************************ 00:05:42.454 START TEST rpc_daemon_integrity 00:05:42.454 ************************************ 00:05:42.454 06:39:35 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:05:42.454 06:39:35 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:42.454 06:39:35 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:42.454 06:39:35 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:42.454 06:39:35 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:42.454 06:39:35 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:42.454 06:39:35 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:42.454 06:39:35 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:42.454 06:39:35 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:42.454 06:39:35 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:42.454 06:39:35 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:42.454 06:39:35 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:42.454 06:39:35 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:05:42.454 06:39:35 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:42.454 06:39:35 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:42.454 06:39:35 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:42.454 06:39:35 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:42.454 06:39:35 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:42.454 { 00:05:42.454 "name": "Malloc2", 00:05:42.454 "aliases": [ 00:05:42.454 "2341353b-0881-423b-abea-1cb00917134a" 00:05:42.454 ], 00:05:42.454 "product_name": "Malloc disk", 00:05:42.454 "block_size": 512, 00:05:42.454 "num_blocks": 16384, 00:05:42.454 "uuid": "2341353b-0881-423b-abea-1cb00917134a", 00:05:42.454 "assigned_rate_limits": { 00:05:42.454 "rw_ios_per_sec": 0, 00:05:42.454 "rw_mbytes_per_sec": 0, 00:05:42.454 "r_mbytes_per_sec": 0, 00:05:42.454 "w_mbytes_per_sec": 0 00:05:42.454 }, 00:05:42.454 "claimed": false, 00:05:42.454 "zoned": false, 00:05:42.454 "supported_io_types": { 00:05:42.454 "read": true, 00:05:42.454 "write": true, 00:05:42.454 "unmap": true, 00:05:42.454 "flush": true, 00:05:42.454 "reset": true, 00:05:42.454 "nvme_admin": false, 00:05:42.454 "nvme_io": false, 00:05:42.454 "nvme_io_md": false, 00:05:42.454 "write_zeroes": true, 00:05:42.454 "zcopy": true, 00:05:42.454 "get_zone_info": false, 00:05:42.454 "zone_management": false, 00:05:42.454 "zone_append": false, 00:05:42.454 "compare": false, 00:05:42.454 
"compare_and_write": false, 00:05:42.454 "abort": true, 00:05:42.454 "seek_hole": false, 00:05:42.454 "seek_data": false, 00:05:42.454 "copy": true, 00:05:42.454 "nvme_iov_md": false 00:05:42.454 }, 00:05:42.454 "memory_domains": [ 00:05:42.454 { 00:05:42.454 "dma_device_id": "system", 00:05:42.454 "dma_device_type": 1 00:05:42.454 }, 00:05:42.454 { 00:05:42.454 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:42.454 "dma_device_type": 2 00:05:42.454 } 00:05:42.454 ], 00:05:42.454 "driver_specific": {} 00:05:42.454 } 00:05:42.454 ]' 00:05:42.454 06:39:35 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:42.454 06:39:35 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:42.454 06:39:35 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:05:42.454 06:39:35 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:42.454 06:39:35 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:42.454 [2024-11-18 06:39:35.355055] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:05:42.454 [2024-11-18 06:39:35.355105] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:42.454 [2024-11-18 06:39:35.355130] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:05:42.454 [2024-11-18 06:39:35.355139] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:42.454 [2024-11-18 06:39:35.357359] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:42.454 [2024-11-18 06:39:35.357488] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:42.454 Passthru0 00:05:42.454 06:39:35 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:42.454 06:39:35 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:42.454 06:39:35 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:42.454 06:39:35 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:42.454 06:39:35 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:42.454 06:39:35 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:42.454 { 00:05:42.454 "name": "Malloc2", 00:05:42.454 "aliases": [ 00:05:42.454 "2341353b-0881-423b-abea-1cb00917134a" 00:05:42.454 ], 00:05:42.454 "product_name": "Malloc disk", 00:05:42.454 "block_size": 512, 00:05:42.454 "num_blocks": 16384, 00:05:42.454 "uuid": "2341353b-0881-423b-abea-1cb00917134a", 00:05:42.454 "assigned_rate_limits": { 00:05:42.454 "rw_ios_per_sec": 0, 00:05:42.454 "rw_mbytes_per_sec": 0, 00:05:42.454 "r_mbytes_per_sec": 0, 00:05:42.454 "w_mbytes_per_sec": 0 00:05:42.454 }, 00:05:42.454 "claimed": true, 00:05:42.454 "claim_type": "exclusive_write", 00:05:42.454 "zoned": false, 00:05:42.454 "supported_io_types": { 00:05:42.454 "read": true, 00:05:42.454 "write": true, 00:05:42.454 "unmap": true, 00:05:42.454 "flush": true, 00:05:42.454 "reset": true, 00:05:42.454 "nvme_admin": false, 00:05:42.454 "nvme_io": false, 00:05:42.454 "nvme_io_md": false, 00:05:42.454 "write_zeroes": true, 00:05:42.454 "zcopy": true, 00:05:42.454 "get_zone_info": false, 00:05:42.454 "zone_management": false, 00:05:42.454 "zone_append": false, 00:05:42.454 "compare": false, 00:05:42.454 "compare_and_write": false, 00:05:42.454 "abort": true, 00:05:42.454 "seek_hole": false, 00:05:42.454 "seek_data": false, 
00:05:42.454 "copy": true, 00:05:42.454 "nvme_iov_md": false 00:05:42.454 }, 00:05:42.454 "memory_domains": [ 00:05:42.454 { 00:05:42.454 "dma_device_id": "system", 00:05:42.454 "dma_device_type": 1 00:05:42.454 }, 00:05:42.454 { 00:05:42.454 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:42.454 "dma_device_type": 2 00:05:42.454 } 00:05:42.454 ], 00:05:42.454 "driver_specific": {} 00:05:42.454 }, 00:05:42.454 { 00:05:42.454 "name": "Passthru0", 00:05:42.454 "aliases": [ 00:05:42.454 "758c3dfc-1655-5fe5-a3e8-be54d993c1aa" 00:05:42.454 ], 00:05:42.454 "product_name": "passthru", 00:05:42.454 "block_size": 512, 00:05:42.454 "num_blocks": 16384, 00:05:42.454 "uuid": "758c3dfc-1655-5fe5-a3e8-be54d993c1aa", 00:05:42.454 "assigned_rate_limits": { 00:05:42.454 "rw_ios_per_sec": 0, 00:05:42.454 "rw_mbytes_per_sec": 0, 00:05:42.454 "r_mbytes_per_sec": 0, 00:05:42.454 "w_mbytes_per_sec": 0 00:05:42.454 }, 00:05:42.454 "claimed": false, 00:05:42.454 "zoned": false, 00:05:42.454 "supported_io_types": { 00:05:42.454 "read": true, 00:05:42.454 "write": true, 00:05:42.454 "unmap": true, 00:05:42.454 "flush": true, 00:05:42.454 "reset": true, 00:05:42.454 "nvme_admin": false, 00:05:42.454 "nvme_io": false, 00:05:42.454 "nvme_io_md": false, 00:05:42.454 "write_zeroes": true, 00:05:42.454 "zcopy": true, 00:05:42.454 "get_zone_info": false, 00:05:42.454 "zone_management": false, 00:05:42.454 "zone_append": false, 00:05:42.454 "compare": false, 00:05:42.454 "compare_and_write": false, 00:05:42.454 "abort": true, 00:05:42.454 "seek_hole": false, 00:05:42.454 "seek_data": false, 00:05:42.454 "copy": true, 00:05:42.454 "nvme_iov_md": false 00:05:42.454 }, 00:05:42.454 "memory_domains": [ 00:05:42.454 { 00:05:42.454 "dma_device_id": "system", 00:05:42.454 "dma_device_type": 1 00:05:42.454 }, 00:05:42.454 { 00:05:42.454 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:42.454 "dma_device_type": 2 00:05:42.454 } 00:05:42.454 ], 00:05:42.454 "driver_specific": { 00:05:42.454 "passthru": { 00:05:42.454 "name": "Passthru0", 00:05:42.454 "base_bdev_name": "Malloc2" 00:05:42.454 } 00:05:42.454 } 00:05:42.454 } 00:05:42.454 ]' 00:05:42.454 06:39:35 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:42.454 06:39:35 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:42.454 06:39:35 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:42.454 06:39:35 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:42.454 06:39:35 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:42.454 06:39:35 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:42.454 06:39:35 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:05:42.455 06:39:35 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:42.455 06:39:35 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:42.455 06:39:35 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:42.455 06:39:35 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:42.455 06:39:35 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:42.455 06:39:35 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:42.455 06:39:35 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:42.455 06:39:35 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 
00:05:42.455 06:39:35 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:42.455 ************************************ 00:05:42.455 END TEST rpc_daemon_integrity 00:05:42.455 ************************************ 00:05:42.455 06:39:35 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:42.455 00:05:42.455 real 0m0.224s 00:05:42.455 user 0m0.128s 00:05:42.455 sys 0m0.030s 00:05:42.455 06:39:35 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:42.455 06:39:35 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:42.455 06:39:35 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:05:42.455 06:39:35 rpc -- rpc/rpc.sh@84 -- # killprocess 69377 00:05:42.455 06:39:35 rpc -- common/autotest_common.sh@954 -- # '[' -z 69377 ']' 00:05:42.455 06:39:35 rpc -- common/autotest_common.sh@958 -- # kill -0 69377 00:05:42.455 06:39:35 rpc -- common/autotest_common.sh@959 -- # uname 00:05:42.455 06:39:35 rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:42.455 06:39:35 rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69377 00:05:42.725 killing process with pid 69377 00:05:42.725 06:39:35 rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:42.725 06:39:35 rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:42.725 06:39:35 rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69377' 00:05:42.725 06:39:35 rpc -- common/autotest_common.sh@973 -- # kill 69377 00:05:42.725 06:39:35 rpc -- common/autotest_common.sh@978 -- # wait 69377 00:05:42.983 00:05:42.983 real 0m2.312s 00:05:42.983 user 0m2.782s 00:05:42.983 sys 0m0.541s 00:05:42.983 06:39:35 rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:42.983 06:39:35 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:42.983 ************************************ 00:05:42.983 END TEST rpc 00:05:42.983 ************************************ 00:05:42.983 06:39:35 -- spdk/autotest.sh@157 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:05:42.983 06:39:35 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:42.983 06:39:35 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:42.983 06:39:35 -- common/autotest_common.sh@10 -- # set +x 00:05:42.983 ************************************ 00:05:42.983 START TEST skip_rpc 00:05:42.983 ************************************ 00:05:42.983 06:39:35 skip_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:05:42.983 * Looking for test storage... 
00:05:42.983 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:05:42.983 06:39:35 skip_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:42.983 06:39:35 skip_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:05:42.983 06:39:35 skip_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:42.983 06:39:36 skip_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:42.983 06:39:36 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:42.983 06:39:36 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:42.983 06:39:36 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:42.983 06:39:36 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:42.983 06:39:36 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:42.983 06:39:36 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:42.983 06:39:36 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:42.983 06:39:36 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:42.983 06:39:36 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:05:42.983 06:39:36 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:42.983 06:39:36 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:42.983 06:39:36 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:05:42.983 06:39:36 skip_rpc -- scripts/common.sh@345 -- # : 1 00:05:42.983 06:39:36 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:42.983 06:39:36 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:42.983 06:39:36 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:05:42.983 06:39:36 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:05:42.983 06:39:36 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:42.983 06:39:36 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:05:42.983 06:39:36 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:42.983 06:39:36 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:05:42.983 06:39:36 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:05:42.983 06:39:36 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:42.983 06:39:36 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:05:42.983 06:39:36 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:42.983 06:39:36 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:42.983 06:39:36 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:42.983 06:39:36 skip_rpc -- scripts/common.sh@368 -- # return 0 00:05:42.983 06:39:36 skip_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:42.983 06:39:36 skip_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:42.983 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:42.983 --rc genhtml_branch_coverage=1 00:05:42.983 --rc genhtml_function_coverage=1 00:05:42.983 --rc genhtml_legend=1 00:05:42.983 --rc geninfo_all_blocks=1 00:05:42.983 --rc geninfo_unexecuted_blocks=1 00:05:42.983 00:05:42.983 ' 00:05:42.983 06:39:36 skip_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:42.983 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:42.983 --rc genhtml_branch_coverage=1 00:05:42.983 --rc genhtml_function_coverage=1 00:05:42.983 --rc genhtml_legend=1 00:05:42.983 --rc geninfo_all_blocks=1 00:05:42.983 --rc geninfo_unexecuted_blocks=1 00:05:42.983 00:05:42.983 ' 00:05:42.983 06:39:36 skip_rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 
00:05:42.983 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:42.983 --rc genhtml_branch_coverage=1 00:05:42.983 --rc genhtml_function_coverage=1 00:05:42.983 --rc genhtml_legend=1 00:05:42.983 --rc geninfo_all_blocks=1 00:05:42.984 --rc geninfo_unexecuted_blocks=1 00:05:42.984 00:05:42.984 ' 00:05:42.984 06:39:36 skip_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:42.984 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:42.984 --rc genhtml_branch_coverage=1 00:05:42.984 --rc genhtml_function_coverage=1 00:05:42.984 --rc genhtml_legend=1 00:05:42.984 --rc geninfo_all_blocks=1 00:05:42.984 --rc geninfo_unexecuted_blocks=1 00:05:42.984 00:05:42.984 ' 00:05:42.984 06:39:36 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:42.984 06:39:36 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:05:42.984 06:39:36 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:05:42.984 06:39:36 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:42.984 06:39:36 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:42.984 06:39:36 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:42.984 ************************************ 00:05:42.984 START TEST skip_rpc 00:05:42.984 ************************************ 00:05:42.984 06:39:36 skip_rpc.skip_rpc -- common/autotest_common.sh@1129 -- # test_skip_rpc 00:05:42.984 06:39:36 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=69579 00:05:42.984 06:39:36 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:05:42.984 06:39:36 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:42.984 06:39:36 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:05:43.243 [2024-11-18 06:39:36.105110] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 
00:05:43.243 [2024-11-18 06:39:36.105318] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69579 ] 00:05:43.243 [2024-11-18 06:39:36.265023] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:43.243 [2024-11-18 06:39:36.284767] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:48.534 06:39:41 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:05:48.534 06:39:41 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # local es=0 00:05:48.534 06:39:41 skip_rpc.skip_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd spdk_get_version 00:05:48.534 06:39:41 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:05:48.534 06:39:41 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:48.534 06:39:41 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:05:48.534 06:39:41 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:48.534 06:39:41 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # rpc_cmd spdk_get_version 00:05:48.534 06:39:41 skip_rpc.skip_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:48.534 06:39:41 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:48.534 06:39:41 skip_rpc.skip_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:05:48.534 06:39:41 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # es=1 00:05:48.534 06:39:41 skip_rpc.skip_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:48.534 06:39:41 skip_rpc.skip_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:48.534 06:39:41 skip_rpc.skip_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:48.534 06:39:41 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:05:48.534 06:39:41 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 69579 00:05:48.534 06:39:41 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # '[' -z 69579 ']' 00:05:48.534 06:39:41 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # kill -0 69579 00:05:48.534 06:39:41 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # uname 00:05:48.534 06:39:41 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:48.534 06:39:41 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69579 00:05:48.534 killing process with pid 69579 00:05:48.534 06:39:41 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:48.534 06:39:41 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:48.534 06:39:41 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69579' 00:05:48.534 06:39:41 skip_rpc.skip_rpc -- common/autotest_common.sh@973 -- # kill 69579 00:05:48.534 06:39:41 skip_rpc.skip_rpc -- common/autotest_common.sh@978 -- # wait 69579 00:05:48.534 ************************************ 00:05:48.534 END TEST skip_rpc 00:05:48.534 ************************************ 00:05:48.534 00:05:48.534 real 0m5.238s 00:05:48.534 user 0m4.873s 00:05:48.534 sys 0m0.257s 00:05:48.534 06:39:41 skip_rpc.skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:48.534 06:39:41 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # 
set +x 00:05:48.534 06:39:41 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:05:48.534 06:39:41 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:48.534 06:39:41 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:48.534 06:39:41 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:48.534 ************************************ 00:05:48.534 START TEST skip_rpc_with_json 00:05:48.534 ************************************ 00:05:48.534 06:39:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_json 00:05:48.534 06:39:41 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:05:48.535 06:39:41 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=69666 00:05:48.535 06:39:41 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:48.535 06:39:41 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:48.535 06:39:41 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 69666 00:05:48.535 06:39:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # '[' -z 69666 ']' 00:05:48.535 06:39:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:48.535 06:39:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:48.535 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:48.535 06:39:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:48.535 06:39:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:48.535 06:39:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:48.535 [2024-11-18 06:39:41.372129] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 
00:05:48.535 [2024-11-18 06:39:41.372222] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69666 ] 00:05:48.535 [2024-11-18 06:39:41.517884] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:48.535 [2024-11-18 06:39:41.534227] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:49.107 06:39:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:49.107 06:39:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@868 -- # return 0 00:05:49.107 06:39:42 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:05:49.107 06:39:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:49.107 06:39:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:49.107 [2024-11-18 06:39:42.163789] nvmf_rpc.c:2703:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:05:49.107 request: 00:05:49.107 { 00:05:49.107 "trtype": "tcp", 00:05:49.107 "method": "nvmf_get_transports", 00:05:49.107 "req_id": 1 00:05:49.107 } 00:05:49.107 Got JSON-RPC error response 00:05:49.107 response: 00:05:49.107 { 00:05:49.107 "code": -19, 00:05:49.107 "message": "No such device" 00:05:49.107 } 00:05:49.107 06:39:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:05:49.107 06:39:42 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:05:49.107 06:39:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:49.107 06:39:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:49.107 [2024-11-18 06:39:42.175868] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:49.107 06:39:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:49.107 06:39:42 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:05:49.107 06:39:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:49.107 06:39:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:49.369 06:39:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:49.369 06:39:42 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:49.370 { 00:05:49.370 "subsystems": [ 00:05:49.370 { 00:05:49.370 "subsystem": "fsdev", 00:05:49.370 "config": [ 00:05:49.370 { 00:05:49.370 "method": "fsdev_set_opts", 00:05:49.370 "params": { 00:05:49.370 "fsdev_io_pool_size": 65535, 00:05:49.370 "fsdev_io_cache_size": 256 00:05:49.370 } 00:05:49.370 } 00:05:49.370 ] 00:05:49.370 }, 00:05:49.370 { 00:05:49.370 "subsystem": "keyring", 00:05:49.370 "config": [] 00:05:49.370 }, 00:05:49.370 { 00:05:49.370 "subsystem": "iobuf", 00:05:49.370 "config": [ 00:05:49.370 { 00:05:49.370 "method": "iobuf_set_options", 00:05:49.370 "params": { 00:05:49.370 "small_pool_count": 8192, 00:05:49.370 "large_pool_count": 1024, 00:05:49.370 "small_bufsize": 8192, 00:05:49.370 "large_bufsize": 135168, 00:05:49.370 "enable_numa": false 00:05:49.370 } 00:05:49.370 } 00:05:49.370 ] 00:05:49.370 }, 00:05:49.370 { 00:05:49.370 "subsystem": "sock", 00:05:49.370 "config": [ 00:05:49.370 { 
00:05:49.370 "method": "sock_set_default_impl", 00:05:49.370 "params": { 00:05:49.370 "impl_name": "posix" 00:05:49.370 } 00:05:49.370 }, 00:05:49.370 { 00:05:49.370 "method": "sock_impl_set_options", 00:05:49.370 "params": { 00:05:49.370 "impl_name": "ssl", 00:05:49.370 "recv_buf_size": 4096, 00:05:49.370 "send_buf_size": 4096, 00:05:49.370 "enable_recv_pipe": true, 00:05:49.370 "enable_quickack": false, 00:05:49.370 "enable_placement_id": 0, 00:05:49.370 "enable_zerocopy_send_server": true, 00:05:49.370 "enable_zerocopy_send_client": false, 00:05:49.370 "zerocopy_threshold": 0, 00:05:49.370 "tls_version": 0, 00:05:49.370 "enable_ktls": false 00:05:49.370 } 00:05:49.370 }, 00:05:49.370 { 00:05:49.370 "method": "sock_impl_set_options", 00:05:49.370 "params": { 00:05:49.370 "impl_name": "posix", 00:05:49.370 "recv_buf_size": 2097152, 00:05:49.370 "send_buf_size": 2097152, 00:05:49.370 "enable_recv_pipe": true, 00:05:49.370 "enable_quickack": false, 00:05:49.370 "enable_placement_id": 0, 00:05:49.370 "enable_zerocopy_send_server": true, 00:05:49.370 "enable_zerocopy_send_client": false, 00:05:49.370 "zerocopy_threshold": 0, 00:05:49.370 "tls_version": 0, 00:05:49.370 "enable_ktls": false 00:05:49.370 } 00:05:49.370 } 00:05:49.370 ] 00:05:49.370 }, 00:05:49.370 { 00:05:49.370 "subsystem": "vmd", 00:05:49.370 "config": [] 00:05:49.370 }, 00:05:49.370 { 00:05:49.370 "subsystem": "accel", 00:05:49.370 "config": [ 00:05:49.370 { 00:05:49.370 "method": "accel_set_options", 00:05:49.370 "params": { 00:05:49.370 "small_cache_size": 128, 00:05:49.370 "large_cache_size": 16, 00:05:49.370 "task_count": 2048, 00:05:49.370 "sequence_count": 2048, 00:05:49.370 "buf_count": 2048 00:05:49.370 } 00:05:49.370 } 00:05:49.370 ] 00:05:49.370 }, 00:05:49.370 { 00:05:49.370 "subsystem": "bdev", 00:05:49.370 "config": [ 00:05:49.370 { 00:05:49.370 "method": "bdev_set_options", 00:05:49.370 "params": { 00:05:49.370 "bdev_io_pool_size": 65535, 00:05:49.370 "bdev_io_cache_size": 256, 00:05:49.370 "bdev_auto_examine": true, 00:05:49.370 "iobuf_small_cache_size": 128, 00:05:49.370 "iobuf_large_cache_size": 16 00:05:49.370 } 00:05:49.370 }, 00:05:49.370 { 00:05:49.370 "method": "bdev_raid_set_options", 00:05:49.370 "params": { 00:05:49.370 "process_window_size_kb": 1024, 00:05:49.370 "process_max_bandwidth_mb_sec": 0 00:05:49.370 } 00:05:49.370 }, 00:05:49.370 { 00:05:49.370 "method": "bdev_iscsi_set_options", 00:05:49.370 "params": { 00:05:49.370 "timeout_sec": 30 00:05:49.370 } 00:05:49.370 }, 00:05:49.370 { 00:05:49.370 "method": "bdev_nvme_set_options", 00:05:49.370 "params": { 00:05:49.370 "action_on_timeout": "none", 00:05:49.370 "timeout_us": 0, 00:05:49.370 "timeout_admin_us": 0, 00:05:49.370 "keep_alive_timeout_ms": 10000, 00:05:49.370 "arbitration_burst": 0, 00:05:49.370 "low_priority_weight": 0, 00:05:49.370 "medium_priority_weight": 0, 00:05:49.370 "high_priority_weight": 0, 00:05:49.370 "nvme_adminq_poll_period_us": 10000, 00:05:49.370 "nvme_ioq_poll_period_us": 0, 00:05:49.370 "io_queue_requests": 0, 00:05:49.370 "delay_cmd_submit": true, 00:05:49.370 "transport_retry_count": 4, 00:05:49.370 "bdev_retry_count": 3, 00:05:49.370 "transport_ack_timeout": 0, 00:05:49.370 "ctrlr_loss_timeout_sec": 0, 00:05:49.370 "reconnect_delay_sec": 0, 00:05:49.370 "fast_io_fail_timeout_sec": 0, 00:05:49.370 "disable_auto_failback": false, 00:05:49.370 "generate_uuids": false, 00:05:49.370 "transport_tos": 0, 00:05:49.370 "nvme_error_stat": false, 00:05:49.370 "rdma_srq_size": 0, 00:05:49.370 "io_path_stat": false, 
00:05:49.370 "allow_accel_sequence": false, 00:05:49.370 "rdma_max_cq_size": 0, 00:05:49.370 "rdma_cm_event_timeout_ms": 0, 00:05:49.370 "dhchap_digests": [ 00:05:49.370 "sha256", 00:05:49.370 "sha384", 00:05:49.370 "sha512" 00:05:49.370 ], 00:05:49.370 "dhchap_dhgroups": [ 00:05:49.370 "null", 00:05:49.370 "ffdhe2048", 00:05:49.370 "ffdhe3072", 00:05:49.370 "ffdhe4096", 00:05:49.370 "ffdhe6144", 00:05:49.370 "ffdhe8192" 00:05:49.370 ] 00:05:49.370 } 00:05:49.370 }, 00:05:49.370 { 00:05:49.370 "method": "bdev_nvme_set_hotplug", 00:05:49.370 "params": { 00:05:49.370 "period_us": 100000, 00:05:49.370 "enable": false 00:05:49.370 } 00:05:49.370 }, 00:05:49.370 { 00:05:49.370 "method": "bdev_wait_for_examine" 00:05:49.370 } 00:05:49.370 ] 00:05:49.370 }, 00:05:49.370 { 00:05:49.370 "subsystem": "scsi", 00:05:49.370 "config": null 00:05:49.370 }, 00:05:49.370 { 00:05:49.370 "subsystem": "scheduler", 00:05:49.370 "config": [ 00:05:49.370 { 00:05:49.370 "method": "framework_set_scheduler", 00:05:49.370 "params": { 00:05:49.370 "name": "static" 00:05:49.370 } 00:05:49.370 } 00:05:49.370 ] 00:05:49.370 }, 00:05:49.370 { 00:05:49.370 "subsystem": "vhost_scsi", 00:05:49.370 "config": [] 00:05:49.370 }, 00:05:49.370 { 00:05:49.370 "subsystem": "vhost_blk", 00:05:49.370 "config": [] 00:05:49.370 }, 00:05:49.370 { 00:05:49.370 "subsystem": "ublk", 00:05:49.370 "config": [] 00:05:49.370 }, 00:05:49.370 { 00:05:49.370 "subsystem": "nbd", 00:05:49.370 "config": [] 00:05:49.370 }, 00:05:49.370 { 00:05:49.370 "subsystem": "nvmf", 00:05:49.370 "config": [ 00:05:49.370 { 00:05:49.370 "method": "nvmf_set_config", 00:05:49.370 "params": { 00:05:49.370 "discovery_filter": "match_any", 00:05:49.370 "admin_cmd_passthru": { 00:05:49.370 "identify_ctrlr": false 00:05:49.370 }, 00:05:49.370 "dhchap_digests": [ 00:05:49.370 "sha256", 00:05:49.370 "sha384", 00:05:49.370 "sha512" 00:05:49.370 ], 00:05:49.370 "dhchap_dhgroups": [ 00:05:49.370 "null", 00:05:49.370 "ffdhe2048", 00:05:49.370 "ffdhe3072", 00:05:49.370 "ffdhe4096", 00:05:49.370 "ffdhe6144", 00:05:49.370 "ffdhe8192" 00:05:49.370 ] 00:05:49.370 } 00:05:49.370 }, 00:05:49.370 { 00:05:49.370 "method": "nvmf_set_max_subsystems", 00:05:49.370 "params": { 00:05:49.370 "max_subsystems": 1024 00:05:49.370 } 00:05:49.370 }, 00:05:49.370 { 00:05:49.370 "method": "nvmf_set_crdt", 00:05:49.370 "params": { 00:05:49.370 "crdt1": 0, 00:05:49.370 "crdt2": 0, 00:05:49.370 "crdt3": 0 00:05:49.370 } 00:05:49.370 }, 00:05:49.370 { 00:05:49.370 "method": "nvmf_create_transport", 00:05:49.370 "params": { 00:05:49.370 "trtype": "TCP", 00:05:49.370 "max_queue_depth": 128, 00:05:49.370 "max_io_qpairs_per_ctrlr": 127, 00:05:49.370 "in_capsule_data_size": 4096, 00:05:49.370 "max_io_size": 131072, 00:05:49.370 "io_unit_size": 131072, 00:05:49.370 "max_aq_depth": 128, 00:05:49.370 "num_shared_buffers": 511, 00:05:49.370 "buf_cache_size": 4294967295, 00:05:49.370 "dif_insert_or_strip": false, 00:05:49.370 "zcopy": false, 00:05:49.370 "c2h_success": true, 00:05:49.370 "sock_priority": 0, 00:05:49.370 "abort_timeout_sec": 1, 00:05:49.370 "ack_timeout": 0, 00:05:49.370 "data_wr_pool_size": 0 00:05:49.370 } 00:05:49.370 } 00:05:49.370 ] 00:05:49.370 }, 00:05:49.370 { 00:05:49.370 "subsystem": "iscsi", 00:05:49.370 "config": [ 00:05:49.370 { 00:05:49.370 "method": "iscsi_set_options", 00:05:49.370 "params": { 00:05:49.370 "node_base": "iqn.2016-06.io.spdk", 00:05:49.370 "max_sessions": 128, 00:05:49.370 "max_connections_per_session": 2, 00:05:49.370 "max_queue_depth": 64, 00:05:49.370 
"default_time2wait": 2, 00:05:49.370 "default_time2retain": 20, 00:05:49.370 "first_burst_length": 8192, 00:05:49.370 "immediate_data": true, 00:05:49.370 "allow_duplicated_isid": false, 00:05:49.370 "error_recovery_level": 0, 00:05:49.371 "nop_timeout": 60, 00:05:49.371 "nop_in_interval": 30, 00:05:49.371 "disable_chap": false, 00:05:49.371 "require_chap": false, 00:05:49.371 "mutual_chap": false, 00:05:49.371 "chap_group": 0, 00:05:49.371 "max_large_datain_per_connection": 64, 00:05:49.371 "max_r2t_per_connection": 4, 00:05:49.371 "pdu_pool_size": 36864, 00:05:49.371 "immediate_data_pool_size": 16384, 00:05:49.371 "data_out_pool_size": 2048 00:05:49.371 } 00:05:49.371 } 00:05:49.371 ] 00:05:49.371 } 00:05:49.371 ] 00:05:49.371 } 00:05:49.371 06:39:42 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:05:49.371 06:39:42 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 69666 00:05:49.371 06:39:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 69666 ']' 00:05:49.371 06:39:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 69666 00:05:49.371 06:39:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:05:49.371 06:39:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:49.371 06:39:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69666 00:05:49.371 killing process with pid 69666 00:05:49.371 06:39:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:49.371 06:39:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:49.371 06:39:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69666' 00:05:49.371 06:39:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 69666 00:05:49.371 06:39:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 69666 00:05:49.632 06:39:42 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=69689 00:05:49.632 06:39:42 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:05:49.633 06:39:42 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:54.925 06:39:47 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 69689 00:05:54.925 06:39:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 69689 ']' 00:05:54.925 06:39:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 69689 00:05:54.925 06:39:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:05:54.925 06:39:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:54.925 06:39:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69689 00:05:54.925 killing process with pid 69689 00:05:54.925 06:39:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:54.925 06:39:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:54.925 06:39:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69689' 00:05:54.925 06:39:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- 
# kill 69689 00:05:54.925 06:39:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 69689 00:05:54.925 06:39:47 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:05:54.925 06:39:47 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:05:54.925 00:05:54.925 real 0m6.509s 00:05:54.925 user 0m6.208s 00:05:54.925 sys 0m0.476s 00:05:54.925 06:39:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:54.925 ************************************ 00:05:54.925 END TEST skip_rpc_with_json 00:05:54.925 06:39:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:54.925 ************************************ 00:05:54.925 06:39:47 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:05:54.925 06:39:47 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:54.925 06:39:47 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:54.925 06:39:47 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:54.925 ************************************ 00:05:54.925 START TEST skip_rpc_with_delay 00:05:54.925 ************************************ 00:05:54.925 06:39:47 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_delay 00:05:54.925 06:39:47 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:54.925 06:39:47 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # local es=0 00:05:54.925 06:39:47 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:54.925 06:39:47 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:54.925 06:39:47 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:54.925 06:39:47 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:54.925 06:39:47 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:54.925 06:39:47 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:54.925 06:39:47 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:54.925 06:39:47 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:54.925 06:39:47 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:05:54.925 06:39:47 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:54.925 [2024-11-18 06:39:47.955056] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
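The skip_rpc_with_delay case above is a pure negative test: spdk_tgt must refuse the conflicting flag combination and exit non-zero. A minimal sketch of that assertion, assuming the build path from this run — the harness's NOT wrapper is replaced here by a plain if, which is an illustrative simplification:

    tgt=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
    if "$tgt" --no-rpc-server -m 0x1 --wait-for-rpc; then
        echo "FAIL: conflicting flags were accepted" >&2; exit 1
    fi
    # The target is expected to abort with the error logged above:
    # "Cannot use '--wait-for-rpc' if no RPC server is going to be started."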
00:05:54.925 06:39:48 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # es=1 00:05:54.925 06:39:48 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:54.925 06:39:48 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:54.925 06:39:48 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:54.925 00:05:54.925 real 0m0.125s 00:05:54.925 user 0m0.068s 00:05:54.925 sys 0m0.056s 00:05:54.925 06:39:48 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:54.925 ************************************ 00:05:54.925 END TEST skip_rpc_with_delay 00:05:54.925 ************************************ 00:05:54.925 06:39:48 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:05:55.187 06:39:48 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:05:55.187 06:39:48 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:05:55.187 06:39:48 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:05:55.187 06:39:48 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:55.187 06:39:48 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:55.187 06:39:48 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:55.187 ************************************ 00:05:55.187 START TEST exit_on_failed_rpc_init 00:05:55.187 ************************************ 00:05:55.187 06:39:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1129 -- # test_exit_on_failed_rpc_init 00:05:55.187 06:39:48 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=69801 00:05:55.187 06:39:48 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 69801 00:05:55.187 06:39:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # '[' -z 69801 ']' 00:05:55.187 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:55.187 06:39:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:55.187 06:39:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:55.187 06:39:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:55.187 06:39:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:55.187 06:39:48 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:55.187 06:39:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:55.187 [2024-11-18 06:39:48.138896] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 
00:05:55.187 [2024-11-18 06:39:48.139274] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69801 ] 00:05:55.447 [2024-11-18 06:39:48.296359] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:55.447 [2024-11-18 06:39:48.318398] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:56.018 06:39:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:56.018 06:39:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@868 -- # return 0 00:05:56.018 06:39:48 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:56.018 06:39:48 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:05:56.019 06:39:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # local es=0 00:05:56.019 06:39:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:05:56.019 06:39:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:56.019 06:39:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:56.019 06:39:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:56.019 06:39:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:56.019 06:39:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:56.019 06:39:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:56.019 06:39:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:56.019 06:39:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:05:56.019 06:39:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:05:56.019 [2024-11-18 06:39:49.048609] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 00:05:56.019 [2024-11-18 06:39:49.048723] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69813 ] 00:05:56.278 [2024-11-18 06:39:49.206309] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:56.278 [2024-11-18 06:39:49.224056] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:56.278 [2024-11-18 06:39:49.224258] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:05:56.278 [2024-11-18 06:39:49.224279] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:05:56.278 [2024-11-18 06:39:49.224287] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:56.278 06:39:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # es=234 00:05:56.278 06:39:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:56.278 06:39:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@664 -- # es=106 00:05:56.278 06:39:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@665 -- # case "$es" in 00:05:56.278 06:39:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@672 -- # es=1 00:05:56.278 06:39:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:56.278 06:39:49 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:05:56.278 06:39:49 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 69801 00:05:56.278 06:39:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # '[' -z 69801 ']' 00:05:56.278 06:39:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # kill -0 69801 00:05:56.278 06:39:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # uname 00:05:56.278 06:39:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:56.278 06:39:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69801 00:05:56.278 killing process with pid 69801 00:05:56.278 06:39:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:56.278 06:39:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:56.278 06:39:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69801' 00:05:56.278 06:39:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@973 -- # kill 69801 00:05:56.278 06:39:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@978 -- # wait 69801 00:05:56.537 ************************************ 00:05:56.537 END TEST exit_on_failed_rpc_init 00:05:56.537 ************************************ 00:05:56.537 00:05:56.537 real 0m1.471s 00:05:56.537 user 0m1.617s 00:05:56.537 sys 0m0.378s 00:05:56.537 06:39:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:56.537 06:39:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:56.538 06:39:49 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:56.538 ************************************ 00:05:56.538 END TEST skip_rpc 00:05:56.538 ************************************ 00:05:56.538 00:05:56.538 real 0m13.687s 00:05:56.538 user 0m12.900s 00:05:56.538 sys 0m1.336s 00:05:56.538 06:39:49 skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:56.538 06:39:49 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:56.538 06:39:49 -- spdk/autotest.sh@158 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:05:56.538 06:39:49 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:56.538 06:39:49 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:56.538 06:39:49 -- common/autotest_common.sh@10 -- # set +x 00:05:56.798 
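Every suite in this run tears its target down the same way: probe the pid with kill -0, check the process name (refusing to kill sudo), send the default SIGTERM, then wait to reap it. A condensed sketch of that pattern, using pid 69801 from the test above — the real killprocess in common/autotest_common.sh also performs the uname and ps checks seen in the log, so this shortened form is an assumption, not the helper verbatim:

    killprocess() {
        local pid=$1
        kill -0 "$pid" 2>/dev/null || return 0    # nothing to do if already gone
        kill "$pid"                               # SIGTERM, as in the log
        wait "$pid" 2>/dev/null                   # reap the child and propagate its exit status
    }
    killprocess 69801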
************************************ 00:05:56.798 START TEST rpc_client 00:05:56.798 ************************************ 00:05:56.798 06:39:49 rpc_client -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:05:56.798 * Looking for test storage... 00:05:56.798 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:05:56.798 06:39:49 rpc_client -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:56.798 06:39:49 rpc_client -- common/autotest_common.sh@1693 -- # lcov --version 00:05:56.798 06:39:49 rpc_client -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:56.798 06:39:49 rpc_client -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:56.798 06:39:49 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:56.798 06:39:49 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:56.798 06:39:49 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:56.798 06:39:49 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:05:56.798 06:39:49 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:05:56.798 06:39:49 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:05:56.798 06:39:49 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:05:56.798 06:39:49 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:05:56.798 06:39:49 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:05:56.798 06:39:49 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:05:56.798 06:39:49 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:56.798 06:39:49 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:05:56.798 06:39:49 rpc_client -- scripts/common.sh@345 -- # : 1 00:05:56.798 06:39:49 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:56.798 06:39:49 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:56.798 06:39:49 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:05:56.798 06:39:49 rpc_client -- scripts/common.sh@353 -- # local d=1 00:05:56.798 06:39:49 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:56.798 06:39:49 rpc_client -- scripts/common.sh@355 -- # echo 1 00:05:56.798 06:39:49 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:05:56.798 06:39:49 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:05:56.798 06:39:49 rpc_client -- scripts/common.sh@353 -- # local d=2 00:05:56.798 06:39:49 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:56.798 06:39:49 rpc_client -- scripts/common.sh@355 -- # echo 2 00:05:56.798 06:39:49 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:05:56.798 06:39:49 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:56.798 06:39:49 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:56.798 06:39:49 rpc_client -- scripts/common.sh@368 -- # return 0 00:05:56.798 06:39:49 rpc_client -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:56.798 06:39:49 rpc_client -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:56.798 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:56.798 --rc genhtml_branch_coverage=1 00:05:56.798 --rc genhtml_function_coverage=1 00:05:56.798 --rc genhtml_legend=1 00:05:56.798 --rc geninfo_all_blocks=1 00:05:56.798 --rc geninfo_unexecuted_blocks=1 00:05:56.798 00:05:56.798 ' 00:05:56.798 06:39:49 rpc_client -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:56.798 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:56.798 --rc genhtml_branch_coverage=1 00:05:56.798 --rc genhtml_function_coverage=1 00:05:56.798 --rc genhtml_legend=1 00:05:56.798 --rc geninfo_all_blocks=1 00:05:56.798 --rc geninfo_unexecuted_blocks=1 00:05:56.798 00:05:56.798 ' 00:05:56.798 06:39:49 rpc_client -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:56.798 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:56.798 --rc genhtml_branch_coverage=1 00:05:56.798 --rc genhtml_function_coverage=1 00:05:56.798 --rc genhtml_legend=1 00:05:56.798 --rc geninfo_all_blocks=1 00:05:56.798 --rc geninfo_unexecuted_blocks=1 00:05:56.798 00:05:56.798 ' 00:05:56.798 06:39:49 rpc_client -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:56.798 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:56.798 --rc genhtml_branch_coverage=1 00:05:56.798 --rc genhtml_function_coverage=1 00:05:56.798 --rc genhtml_legend=1 00:05:56.798 --rc geninfo_all_blocks=1 00:05:56.798 --rc geninfo_unexecuted_blocks=1 00:05:56.798 00:05:56.798 ' 00:05:56.798 06:39:49 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:05:56.798 OK 00:05:56.798 06:39:49 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:05:56.798 00:05:56.798 real 0m0.219s 00:05:56.798 user 0m0.134s 00:05:56.798 sys 0m0.089s 00:05:56.799 ************************************ 00:05:56.799 END TEST rpc_client 00:05:56.799 ************************************ 00:05:56.799 06:39:49 rpc_client -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:56.799 06:39:49 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:05:57.061 06:39:49 -- spdk/autotest.sh@159 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:05:57.061 06:39:49 -- 
common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:57.061 06:39:49 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:57.061 06:39:49 -- common/autotest_common.sh@10 -- # set +x 00:05:57.061 ************************************ 00:05:57.061 START TEST json_config 00:05:57.061 ************************************ 00:05:57.061 06:39:49 json_config -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:05:57.061 06:39:49 json_config -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:57.061 06:39:49 json_config -- common/autotest_common.sh@1693 -- # lcov --version 00:05:57.061 06:39:49 json_config -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:57.061 06:39:50 json_config -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:57.061 06:39:50 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:57.061 06:39:50 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:57.061 06:39:50 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:57.061 06:39:50 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:05:57.061 06:39:50 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:05:57.061 06:39:50 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:05:57.061 06:39:50 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:05:57.061 06:39:50 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:05:57.061 06:39:50 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:05:57.061 06:39:50 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:05:57.061 06:39:50 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:57.061 06:39:50 json_config -- scripts/common.sh@344 -- # case "$op" in 00:05:57.061 06:39:50 json_config -- scripts/common.sh@345 -- # : 1 00:05:57.061 06:39:50 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:57.061 06:39:50 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:57.061 06:39:50 json_config -- scripts/common.sh@365 -- # decimal 1 00:05:57.061 06:39:50 json_config -- scripts/common.sh@353 -- # local d=1 00:05:57.061 06:39:50 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:57.061 06:39:50 json_config -- scripts/common.sh@355 -- # echo 1 00:05:57.061 06:39:50 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:05:57.061 06:39:50 json_config -- scripts/common.sh@366 -- # decimal 2 00:05:57.061 06:39:50 json_config -- scripts/common.sh@353 -- # local d=2 00:05:57.061 06:39:50 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:57.061 06:39:50 json_config -- scripts/common.sh@355 -- # echo 2 00:05:57.061 06:39:50 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:05:57.061 06:39:50 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:57.061 06:39:50 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:57.061 06:39:50 json_config -- scripts/common.sh@368 -- # return 0 00:05:57.061 06:39:50 json_config -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:57.061 06:39:50 json_config -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:57.061 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:57.061 --rc genhtml_branch_coverage=1 00:05:57.061 --rc genhtml_function_coverage=1 00:05:57.061 --rc genhtml_legend=1 00:05:57.061 --rc geninfo_all_blocks=1 00:05:57.061 --rc geninfo_unexecuted_blocks=1 00:05:57.061 00:05:57.061 ' 00:05:57.061 06:39:50 json_config -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:57.061 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:57.061 --rc genhtml_branch_coverage=1 00:05:57.061 --rc genhtml_function_coverage=1 00:05:57.061 --rc genhtml_legend=1 00:05:57.061 --rc geninfo_all_blocks=1 00:05:57.061 --rc geninfo_unexecuted_blocks=1 00:05:57.061 00:05:57.061 ' 00:05:57.061 06:39:50 json_config -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:57.061 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:57.061 --rc genhtml_branch_coverage=1 00:05:57.061 --rc genhtml_function_coverage=1 00:05:57.061 --rc genhtml_legend=1 00:05:57.061 --rc geninfo_all_blocks=1 00:05:57.061 --rc geninfo_unexecuted_blocks=1 00:05:57.061 00:05:57.061 ' 00:05:57.061 06:39:50 json_config -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:57.061 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:57.061 --rc genhtml_branch_coverage=1 00:05:57.061 --rc genhtml_function_coverage=1 00:05:57.061 --rc genhtml_legend=1 00:05:57.061 --rc geninfo_all_blocks=1 00:05:57.061 --rc geninfo_unexecuted_blocks=1 00:05:57.061 00:05:57.061 ' 00:05:57.061 06:39:50 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:05:57.061 06:39:50 json_config -- nvmf/common.sh@7 -- # uname -s 00:05:57.061 06:39:50 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:57.061 06:39:50 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:57.061 06:39:50 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:57.061 06:39:50 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:57.061 06:39:50 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:57.061 06:39:50 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:57.061 06:39:50 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:57.061 06:39:50 
json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:57.061 06:39:50 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:57.061 06:39:50 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:57.061 06:39:50 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a30b9165-d26e-42a9-8b3c-daabdf272c4b 00:05:57.061 06:39:50 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=a30b9165-d26e-42a9-8b3c-daabdf272c4b 00:05:57.061 06:39:50 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:57.061 06:39:50 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:57.061 06:39:50 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:57.061 06:39:50 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:57.061 06:39:50 json_config -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:57.061 06:39:50 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:05:57.061 06:39:50 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:57.061 06:39:50 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:57.061 06:39:50 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:57.061 06:39:50 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:57.061 06:39:50 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:57.061 06:39:50 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:57.061 06:39:50 json_config -- paths/export.sh@5 -- # export PATH 00:05:57.061 06:39:50 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:57.061 06:39:50 json_config -- nvmf/common.sh@51 -- # : 0 00:05:57.061 06:39:50 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:57.061 06:39:50 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:57.061 06:39:50 json_config -- 
nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:57.061 06:39:50 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:57.061 06:39:50 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:57.061 06:39:50 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:57.061 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:57.061 06:39:50 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:57.061 06:39:50 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:57.061 06:39:50 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:57.061 06:39:50 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:05:57.061 06:39:50 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:05:57.062 06:39:50 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:05:57.062 06:39:50 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:05:57.062 06:39:50 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:05:57.062 06:39:50 json_config -- json_config/json_config.sh@27 -- # echo 'WARNING: No tests are enabled so not running JSON configuration tests' 00:05:57.062 WARNING: No tests are enabled so not running JSON configuration tests 00:05:57.062 06:39:50 json_config -- json_config/json_config.sh@28 -- # exit 0 00:05:57.062 00:05:57.062 real 0m0.154s 00:05:57.062 user 0m0.097s 00:05:57.062 sys 0m0.055s 00:05:57.062 06:39:50 json_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:57.062 06:39:50 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:57.062 ************************************ 00:05:57.062 END TEST json_config 00:05:57.062 ************************************ 00:05:57.062 06:39:50 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:05:57.062 06:39:50 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:57.062 06:39:50 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:57.062 06:39:50 -- common/autotest_common.sh@10 -- # set +x 00:05:57.062 ************************************ 00:05:57.062 START TEST json_config_extra_key 00:05:57.062 ************************************ 00:05:57.062 06:39:50 json_config_extra_key -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:05:57.324 06:39:50 json_config_extra_key -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:57.324 06:39:50 json_config_extra_key -- common/autotest_common.sh@1693 -- # lcov --version 00:05:57.324 06:39:50 json_config_extra_key -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:57.324 06:39:50 json_config_extra_key -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:57.324 06:39:50 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:57.324 06:39:50 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:57.324 06:39:50 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:57.324 06:39:50 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:05:57.324 06:39:50 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:05:57.324 06:39:50 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:05:57.324 06:39:50 
json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:05:57.324 06:39:50 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:05:57.324 06:39:50 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:05:57.324 06:39:50 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:05:57.324 06:39:50 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:57.324 06:39:50 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:05:57.324 06:39:50 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:05:57.324 06:39:50 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:57.324 06:39:50 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:57.324 06:39:50 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:05:57.324 06:39:50 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:05:57.324 06:39:50 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:57.324 06:39:50 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:05:57.324 06:39:50 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:05:57.324 06:39:50 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:05:57.324 06:39:50 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:05:57.324 06:39:50 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:57.324 06:39:50 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:05:57.324 06:39:50 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:05:57.324 06:39:50 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:57.324 06:39:50 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:57.324 06:39:50 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:05:57.324 06:39:50 json_config_extra_key -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:57.324 06:39:50 json_config_extra_key -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:57.324 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:57.324 --rc genhtml_branch_coverage=1 00:05:57.324 --rc genhtml_function_coverage=1 00:05:57.324 --rc genhtml_legend=1 00:05:57.324 --rc geninfo_all_blocks=1 00:05:57.324 --rc geninfo_unexecuted_blocks=1 00:05:57.324 00:05:57.324 ' 00:05:57.324 06:39:50 json_config_extra_key -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:57.324 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:57.324 --rc genhtml_branch_coverage=1 00:05:57.324 --rc genhtml_function_coverage=1 00:05:57.324 --rc genhtml_legend=1 00:05:57.324 --rc geninfo_all_blocks=1 00:05:57.324 --rc geninfo_unexecuted_blocks=1 00:05:57.324 00:05:57.324 ' 00:05:57.324 06:39:50 json_config_extra_key -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:57.324 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:57.324 --rc genhtml_branch_coverage=1 00:05:57.324 --rc genhtml_function_coverage=1 00:05:57.324 --rc genhtml_legend=1 00:05:57.324 --rc geninfo_all_blocks=1 00:05:57.324 --rc geninfo_unexecuted_blocks=1 00:05:57.324 00:05:57.324 ' 00:05:57.324 06:39:50 json_config_extra_key -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:57.324 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:57.324 --rc genhtml_branch_coverage=1 00:05:57.324 --rc 
genhtml_function_coverage=1 00:05:57.324 --rc genhtml_legend=1 00:05:57.324 --rc geninfo_all_blocks=1 00:05:57.324 --rc geninfo_unexecuted_blocks=1 00:05:57.324 00:05:57.324 ' 00:05:57.324 06:39:50 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:05:57.324 06:39:50 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:05:57.324 06:39:50 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:57.324 06:39:50 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:57.324 06:39:50 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:57.324 06:39:50 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:57.324 06:39:50 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:57.324 06:39:50 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:57.324 06:39:50 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:57.324 06:39:50 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:57.324 06:39:50 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:57.324 06:39:50 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:57.324 06:39:50 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a30b9165-d26e-42a9-8b3c-daabdf272c4b 00:05:57.324 06:39:50 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=a30b9165-d26e-42a9-8b3c-daabdf272c4b 00:05:57.324 06:39:50 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:57.324 06:39:50 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:57.324 06:39:50 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:57.324 06:39:50 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:57.324 06:39:50 json_config_extra_key -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:57.324 06:39:50 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:05:57.324 06:39:50 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:57.324 06:39:50 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:57.324 06:39:50 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:57.324 06:39:50 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:57.324 06:39:50 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:57.324 06:39:50 json_config_extra_key -- paths/export.sh@4 
-- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:57.324 06:39:50 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:05:57.324 06:39:50 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:57.324 06:39:50 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:05:57.324 06:39:50 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:57.324 06:39:50 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:57.324 06:39:50 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:57.324 06:39:50 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:57.324 06:39:50 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:57.324 06:39:50 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:57.324 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:57.324 06:39:50 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:57.324 06:39:50 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:57.324 06:39:50 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:57.324 06:39:50 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:05:57.324 06:39:50 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:05:57.324 06:39:50 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:05:57.324 06:39:50 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:05:57.324 06:39:50 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:05:57.324 06:39:50 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:05:57.324 06:39:50 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:05:57.324 06:39:50 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:05:57.324 06:39:50 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:05:57.324 06:39:50 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:57.324 06:39:50 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:05:57.324 INFO: launching applications... 
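Note: the "[: : integer expression expected" message above (nvmf/common.sh line 33, seen once per test that sources the file) is bash noise rather than a test failure: the script runs '[' '' -eq 1 ']', and test(1)'s -eq operator requires an integer on both sides, so an empty variable makes the expression error out with status 2 and the enclosing conditional simply takes the false branch. A minimal sketch of the pitfall and a defensive rewrite -- the variable name here is illustrative, not the one nvmf/common.sh actually tests:

  #!/usr/bin/env bash
  flag=""                           # empty in this environment
  if [ "$flag" -eq 1 ]; then        # bash: [: : integer expression expected
      echo "enabled"
  fi
  if [ "${flag:-0}" -eq 1 ]; then   # defaulting to 0 keeps the test well-formed
      echo "enabled"
  fi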
00:05:57.324 06:39:50 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:05:57.324 06:39:50 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:05:57.325 06:39:50 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:05:57.325 06:39:50 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:57.325 06:39:50 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:57.325 06:39:50 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:05:57.325 06:39:50 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:57.325 06:39:50 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:57.325 06:39:50 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=69996 00:05:57.325 06:39:50 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:57.325 Waiting for target to run... 00:05:57.325 06:39:50 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 69996 /var/tmp/spdk_tgt.sock 00:05:57.325 06:39:50 json_config_extra_key -- common/autotest_common.sh@835 -- # '[' -z 69996 ']' 00:05:57.325 06:39:50 json_config_extra_key -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:57.325 06:39:50 json_config_extra_key -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:57.325 06:39:50 json_config_extra_key -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:57.325 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:57.325 06:39:50 json_config_extra_key -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:57.325 06:39:50 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:57.325 06:39:50 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:05:57.325 [2024-11-18 06:39:50.350567] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 00:05:57.325 [2024-11-18 06:39:50.351106] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69996 ] 00:05:57.894 [2024-11-18 06:39:50.748700] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:57.894 [2024-11-18 06:39:50.758214] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:58.155 06:39:51 json_config_extra_key -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:58.155 00:05:58.155 INFO: shutting down applications... 00:05:58.155 06:39:51 json_config_extra_key -- common/autotest_common.sh@868 -- # return 0 00:05:58.155 06:39:51 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:05:58.155 06:39:51 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 
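Note: json_config_test_start_app above launches spdk_tgt in the background with -r pointing at a private RPC socket (/var/tmp/spdk_tgt.sock) and then blocks in waitforlisten until pid 69996 is serving on it. A minimal sketch of that readiness loop, assuming the real waitforlisten in autotest_common.sh differs in detail (it probes the RPC endpoint and has its own retry budget); wait_for_unix_socket is an illustrative name:

  #!/usr/bin/env bash
  wait_for_unix_socket() {
      local pid=$1 sock=$2 retries=${3:-100}
      while (( retries-- > 0 )); do
          kill -0 "$pid" 2>/dev/null || return 1   # target died before listening
          [ -S "$sock" ] && return 0               # socket exists: target is up
          sleep 0.1
      done
      return 1
  }

  build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock \
      --json test/json_config/extra_key.json &
  wait_for_unix_socket $! /var/tmp/spdk_tgt.sock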
00:05:58.155 06:39:51 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:05:58.155 06:39:51 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:05:58.155 06:39:51 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:58.155 06:39:51 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 69996 ]] 00:05:58.155 06:39:51 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 69996 00:05:58.155 06:39:51 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:58.155 06:39:51 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:58.155 06:39:51 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 69996 00:05:58.155 06:39:51 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:58.758 06:39:51 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:58.758 06:39:51 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:58.758 06:39:51 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 69996 00:05:58.758 06:39:51 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:58.758 06:39:51 json_config_extra_key -- json_config/common.sh@43 -- # break 00:05:58.758 06:39:51 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:58.758 06:39:51 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:58.758 SPDK target shutdown done 00:05:58.758 06:39:51 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:05:58.758 Success 00:05:58.758 00:05:58.758 real 0m1.576s 00:05:58.758 user 0m1.142s 00:05:58.758 sys 0m0.439s 00:05:58.758 06:39:51 json_config_extra_key -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:58.758 ************************************ 00:05:58.758 06:39:51 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:58.758 END TEST json_config_extra_key 00:05:58.758 ************************************ 00:05:58.758 06:39:51 -- spdk/autotest.sh@161 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:58.758 06:39:51 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:58.758 06:39:51 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:58.758 06:39:51 -- common/autotest_common.sh@10 -- # set +x 00:05:58.758 ************************************ 00:05:58.758 START TEST alias_rpc 00:05:58.758 ************************************ 00:05:58.759 06:39:51 alias_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:58.759 * Looking for test storage... 
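Note: the shutdown sequence traced above sends SIGINT to the target and then polls kill -0 (signal 0 delivers nothing; it only checks that the pid still exists) for up to 30 half-second intervals -- here the process exited after one iteration, producing "SPDK target shutdown done". A minimal sketch of the same pattern; the SIGKILL escalation at the end is an assumption, not something this trace exercises:

  shutdown_app() {
      local pid=$1
      kill -SIGINT "$pid" || return 1
      for (( i = 0; i < 30; i++ )); do
          kill -0 "$pid" 2>/dev/null || return 0   # pid gone: clean shutdown
          sleep 0.5
      done
      kill -SIGKILL "$pid"                         # give up waiting politely
  }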
00:05:58.759 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:05:58.759 06:39:51 alias_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:58.759 06:39:51 alias_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:05:58.759 06:39:51 alias_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:59.021 06:39:51 alias_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:59.021 06:39:51 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:59.021 06:39:51 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:59.021 06:39:51 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:59.021 06:39:51 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:59.021 06:39:51 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:59.021 06:39:51 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:59.021 06:39:51 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:59.021 06:39:51 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:59.021 06:39:51 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:05:59.021 06:39:51 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:59.021 06:39:51 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:59.021 06:39:51 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:05:59.021 06:39:51 alias_rpc -- scripts/common.sh@345 -- # : 1 00:05:59.021 06:39:51 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:59.021 06:39:51 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:59.021 06:39:51 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:05:59.021 06:39:51 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:05:59.021 06:39:51 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:59.021 06:39:51 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:05:59.021 06:39:51 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:59.021 06:39:51 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:05:59.021 06:39:51 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:05:59.021 06:39:51 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:59.021 06:39:51 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:05:59.021 06:39:51 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:59.021 06:39:51 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:59.021 06:39:51 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:59.021 06:39:51 alias_rpc -- scripts/common.sh@368 -- # return 0 00:05:59.021 06:39:51 alias_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:59.021 06:39:51 alias_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:59.021 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:59.021 --rc genhtml_branch_coverage=1 00:05:59.021 --rc genhtml_function_coverage=1 00:05:59.021 --rc genhtml_legend=1 00:05:59.021 --rc geninfo_all_blocks=1 00:05:59.021 --rc geninfo_unexecuted_blocks=1 00:05:59.021 00:05:59.021 ' 00:05:59.021 06:39:51 alias_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:59.021 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:59.021 --rc genhtml_branch_coverage=1 00:05:59.021 --rc genhtml_function_coverage=1 00:05:59.021 --rc genhtml_legend=1 00:05:59.021 --rc geninfo_all_blocks=1 00:05:59.021 --rc geninfo_unexecuted_blocks=1 00:05:59.021 00:05:59.021 ' 00:05:59.021 06:39:51 alias_rpc -- 
common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:59.021 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:59.021 --rc genhtml_branch_coverage=1 00:05:59.021 --rc genhtml_function_coverage=1 00:05:59.021 --rc genhtml_legend=1 00:05:59.021 --rc geninfo_all_blocks=1 00:05:59.021 --rc geninfo_unexecuted_blocks=1 00:05:59.021 00:05:59.021 ' 00:05:59.021 06:39:51 alias_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:59.021 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:59.021 --rc genhtml_branch_coverage=1 00:05:59.021 --rc genhtml_function_coverage=1 00:05:59.021 --rc genhtml_legend=1 00:05:59.021 --rc geninfo_all_blocks=1 00:05:59.021 --rc geninfo_unexecuted_blocks=1 00:05:59.021 00:05:59.021 ' 00:05:59.021 06:39:51 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:59.021 06:39:51 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=70069 00:05:59.021 06:39:51 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 70069 00:05:59.021 06:39:51 alias_rpc -- common/autotest_common.sh@835 -- # '[' -z 70069 ']' 00:05:59.021 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:59.021 06:39:51 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:59.021 06:39:51 alias_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:59.021 06:39:51 alias_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:59.021 06:39:51 alias_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:59.021 06:39:51 alias_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:59.021 06:39:51 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:59.021 [2024-11-18 06:39:51.967962] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 
00:05:59.021 [2024-11-18 06:39:51.968120] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70069 ] 00:05:59.282 [2024-11-18 06:39:52.130445] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:59.282 [2024-11-18 06:39:52.151345] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:59.854 06:39:52 alias_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:59.854 06:39:52 alias_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:59.854 06:39:52 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:06:00.114 06:39:53 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 70069 00:06:00.114 06:39:53 alias_rpc -- common/autotest_common.sh@954 -- # '[' -z 70069 ']' 00:06:00.114 06:39:53 alias_rpc -- common/autotest_common.sh@958 -- # kill -0 70069 00:06:00.114 06:39:53 alias_rpc -- common/autotest_common.sh@959 -- # uname 00:06:00.114 06:39:53 alias_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:00.114 06:39:53 alias_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70069 00:06:00.114 killing process with pid 70069 00:06:00.114 06:39:53 alias_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:00.114 06:39:53 alias_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:00.114 06:39:53 alias_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70069' 00:06:00.114 06:39:53 alias_rpc -- common/autotest_common.sh@973 -- # kill 70069 00:06:00.114 06:39:53 alias_rpc -- common/autotest_common.sh@978 -- # wait 70069 00:06:00.379 ************************************ 00:06:00.380 END TEST alias_rpc 00:06:00.380 ************************************ 00:06:00.380 00:06:00.380 real 0m1.634s 00:06:00.380 user 0m1.745s 00:06:00.380 sys 0m0.434s 00:06:00.380 06:39:53 alias_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:00.380 06:39:53 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:00.380 06:39:53 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:06:00.380 06:39:53 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:06:00.380 06:39:53 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:00.380 06:39:53 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:00.380 06:39:53 -- common/autotest_common.sh@10 -- # set +x 00:06:00.380 ************************************ 00:06:00.380 START TEST spdkcli_tcp 00:06:00.380 ************************************ 00:06:00.380 06:39:53 spdkcli_tcp -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:06:00.642 * Looking for test storage... 
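Note: killprocess above guards the kill with two checks: kill -0 to confirm pid 70069 is still alive, and ps --no-headers -o comm= to confirm the process name (reactor_0, the SPDK main thread) is not "sudo", so a recycled pid cannot make the test signal an unrelated privileged wrapper. A minimal sketch of the same idea, assuming the real autotest_common.sh helper also covers FreeBSD and non-child pids:

  killprocess() {
      local pid=$1 name
      [ -z "$pid" ] && return 1
      kill -0 "$pid" 2>/dev/null || return 0    # already gone
      name=$(ps --no-headers -o comm= "$pid")   # e.g. "reactor_0" for spdk_tgt
      [ "$name" = sudo ] && return 1            # refuse to signal a sudo wrapper
      kill "$pid" && wait "$pid" 2>/dev/null    # wait reaps it when it is our child
  }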
00:06:00.642 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:06:00.642 06:39:53 spdkcli_tcp -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:00.642 06:39:53 spdkcli_tcp -- common/autotest_common.sh@1693 -- # lcov --version 00:06:00.642 06:39:53 spdkcli_tcp -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:00.642 06:39:53 spdkcli_tcp -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:00.642 06:39:53 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:00.642 06:39:53 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:00.642 06:39:53 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:00.642 06:39:53 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:06:00.642 06:39:53 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:06:00.642 06:39:53 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:06:00.642 06:39:53 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:06:00.642 06:39:53 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:06:00.642 06:39:53 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:06:00.642 06:39:53 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:06:00.642 06:39:53 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:00.642 06:39:53 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:06:00.642 06:39:53 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:06:00.642 06:39:53 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:00.642 06:39:53 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:00.642 06:39:53 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:06:00.642 06:39:53 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:06:00.642 06:39:53 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:00.642 06:39:53 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:06:00.642 06:39:53 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:06:00.642 06:39:53 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:06:00.642 06:39:53 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:06:00.642 06:39:53 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:00.642 06:39:53 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:06:00.642 06:39:53 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:06:00.642 06:39:53 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:00.642 06:39:53 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:00.642 06:39:53 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:06:00.642 06:39:53 spdkcli_tcp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:00.642 06:39:53 spdkcli_tcp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:00.642 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:00.642 --rc genhtml_branch_coverage=1 00:06:00.642 --rc genhtml_function_coverage=1 00:06:00.642 --rc genhtml_legend=1 00:06:00.642 --rc geninfo_all_blocks=1 00:06:00.642 --rc geninfo_unexecuted_blocks=1 00:06:00.642 00:06:00.642 ' 00:06:00.642 06:39:53 spdkcli_tcp -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:00.642 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:00.642 --rc genhtml_branch_coverage=1 00:06:00.642 --rc genhtml_function_coverage=1 00:06:00.642 --rc genhtml_legend=1 00:06:00.642 --rc geninfo_all_blocks=1 00:06:00.642 --rc geninfo_unexecuted_blocks=1 00:06:00.642 
00:06:00.642 ' 00:06:00.642 06:39:53 spdkcli_tcp -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:00.642 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:00.642 --rc genhtml_branch_coverage=1 00:06:00.642 --rc genhtml_function_coverage=1 00:06:00.642 --rc genhtml_legend=1 00:06:00.642 --rc geninfo_all_blocks=1 00:06:00.642 --rc geninfo_unexecuted_blocks=1 00:06:00.642 00:06:00.642 ' 00:06:00.642 06:39:53 spdkcli_tcp -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:00.642 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:00.642 --rc genhtml_branch_coverage=1 00:06:00.642 --rc genhtml_function_coverage=1 00:06:00.642 --rc genhtml_legend=1 00:06:00.642 --rc geninfo_all_blocks=1 00:06:00.642 --rc geninfo_unexecuted_blocks=1 00:06:00.642 00:06:00.642 ' 00:06:00.642 06:39:53 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:06:00.642 06:39:53 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:06:00.642 06:39:53 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:06:00.642 06:39:53 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:06:00.642 06:39:53 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:06:00.642 06:39:53 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:06:00.642 06:39:53 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:06:00.642 06:39:53 spdkcli_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:00.642 06:39:53 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:00.642 06:39:53 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=70149 00:06:00.642 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:00.642 06:39:53 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 70149 00:06:00.642 06:39:53 spdkcli_tcp -- common/autotest_common.sh@835 -- # '[' -z 70149 ']' 00:06:00.642 06:39:53 spdkcli_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:00.642 06:39:53 spdkcli_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:00.642 06:39:53 spdkcli_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:00.642 06:39:53 spdkcli_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:00.642 06:39:53 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:00.642 06:39:53 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:06:00.642 [2024-11-18 06:39:53.684205] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 
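Note: spdkcli_tcp exercises the same JSON-RPC interface over TCP rather than the default UNIX socket: once the target below is up, socat bridges 127.0.0.1:9998 to /var/tmp/spdk.sock and rpc.py connects through the bridge, which is what produces the long rpc_get_methods listing that follows. The flags are the ones visible in the trace (-r retries the connection, -t is the timeout in seconds):

  # expose the UNIX-domain RPC socket on TCP port 9998
  socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock &

  # query the server's method list through the bridge
  scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods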
00:06:00.642 [2024-11-18 06:39:53.684348] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70149 ] 00:06:00.901 [2024-11-18 06:39:53.838204] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:00.901 [2024-11-18 06:39:53.868641] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:00.901 [2024-11-18 06:39:53.868704] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:01.474 06:39:54 spdkcli_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:01.474 06:39:54 spdkcli_tcp -- common/autotest_common.sh@868 -- # return 0 00:06:01.474 06:39:54 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=70166 00:06:01.474 06:39:54 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:06:01.474 06:39:54 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:06:01.737 [ 00:06:01.737 "bdev_malloc_delete", 00:06:01.737 "bdev_malloc_create", 00:06:01.737 "bdev_null_resize", 00:06:01.737 "bdev_null_delete", 00:06:01.737 "bdev_null_create", 00:06:01.737 "bdev_nvme_cuse_unregister", 00:06:01.737 "bdev_nvme_cuse_register", 00:06:01.737 "bdev_opal_new_user", 00:06:01.737 "bdev_opal_set_lock_state", 00:06:01.737 "bdev_opal_delete", 00:06:01.737 "bdev_opal_get_info", 00:06:01.737 "bdev_opal_create", 00:06:01.737 "bdev_nvme_opal_revert", 00:06:01.737 "bdev_nvme_opal_init", 00:06:01.737 "bdev_nvme_send_cmd", 00:06:01.737 "bdev_nvme_set_keys", 00:06:01.737 "bdev_nvme_get_path_iostat", 00:06:01.737 "bdev_nvme_get_mdns_discovery_info", 00:06:01.737 "bdev_nvme_stop_mdns_discovery", 00:06:01.737 "bdev_nvme_start_mdns_discovery", 00:06:01.737 "bdev_nvme_set_multipath_policy", 00:06:01.737 "bdev_nvme_set_preferred_path", 00:06:01.737 "bdev_nvme_get_io_paths", 00:06:01.737 "bdev_nvme_remove_error_injection", 00:06:01.737 "bdev_nvme_add_error_injection", 00:06:01.737 "bdev_nvme_get_discovery_info", 00:06:01.737 "bdev_nvme_stop_discovery", 00:06:01.737 "bdev_nvme_start_discovery", 00:06:01.737 "bdev_nvme_get_controller_health_info", 00:06:01.737 "bdev_nvme_disable_controller", 00:06:01.737 "bdev_nvme_enable_controller", 00:06:01.737 "bdev_nvme_reset_controller", 00:06:01.737 "bdev_nvme_get_transport_statistics", 00:06:01.737 "bdev_nvme_apply_firmware", 00:06:01.737 "bdev_nvme_detach_controller", 00:06:01.737 "bdev_nvme_get_controllers", 00:06:01.737 "bdev_nvme_attach_controller", 00:06:01.737 "bdev_nvme_set_hotplug", 00:06:01.737 "bdev_nvme_set_options", 00:06:01.737 "bdev_passthru_delete", 00:06:01.737 "bdev_passthru_create", 00:06:01.737 "bdev_lvol_set_parent_bdev", 00:06:01.737 "bdev_lvol_set_parent", 00:06:01.737 "bdev_lvol_check_shallow_copy", 00:06:01.737 "bdev_lvol_start_shallow_copy", 00:06:01.737 "bdev_lvol_grow_lvstore", 00:06:01.737 "bdev_lvol_get_lvols", 00:06:01.737 "bdev_lvol_get_lvstores", 00:06:01.737 "bdev_lvol_delete", 00:06:01.737 "bdev_lvol_set_read_only", 00:06:01.737 "bdev_lvol_resize", 00:06:01.737 "bdev_lvol_decouple_parent", 00:06:01.737 "bdev_lvol_inflate", 00:06:01.737 "bdev_lvol_rename", 00:06:01.737 "bdev_lvol_clone_bdev", 00:06:01.737 "bdev_lvol_clone", 00:06:01.737 "bdev_lvol_snapshot", 00:06:01.737 "bdev_lvol_create", 00:06:01.737 "bdev_lvol_delete_lvstore", 00:06:01.737 "bdev_lvol_rename_lvstore", 00:06:01.737 
"bdev_lvol_create_lvstore", 00:06:01.737 "bdev_raid_set_options", 00:06:01.737 "bdev_raid_remove_base_bdev", 00:06:01.737 "bdev_raid_add_base_bdev", 00:06:01.737 "bdev_raid_delete", 00:06:01.737 "bdev_raid_create", 00:06:01.737 "bdev_raid_get_bdevs", 00:06:01.737 "bdev_error_inject_error", 00:06:01.737 "bdev_error_delete", 00:06:01.737 "bdev_error_create", 00:06:01.737 "bdev_split_delete", 00:06:01.737 "bdev_split_create", 00:06:01.737 "bdev_delay_delete", 00:06:01.737 "bdev_delay_create", 00:06:01.737 "bdev_delay_update_latency", 00:06:01.737 "bdev_zone_block_delete", 00:06:01.737 "bdev_zone_block_create", 00:06:01.737 "blobfs_create", 00:06:01.737 "blobfs_detect", 00:06:01.737 "blobfs_set_cache_size", 00:06:01.737 "bdev_xnvme_delete", 00:06:01.737 "bdev_xnvme_create", 00:06:01.737 "bdev_aio_delete", 00:06:01.737 "bdev_aio_rescan", 00:06:01.737 "bdev_aio_create", 00:06:01.737 "bdev_ftl_set_property", 00:06:01.737 "bdev_ftl_get_properties", 00:06:01.737 "bdev_ftl_get_stats", 00:06:01.737 "bdev_ftl_unmap", 00:06:01.737 "bdev_ftl_unload", 00:06:01.737 "bdev_ftl_delete", 00:06:01.737 "bdev_ftl_load", 00:06:01.737 "bdev_ftl_create", 00:06:01.737 "bdev_virtio_attach_controller", 00:06:01.737 "bdev_virtio_scsi_get_devices", 00:06:01.737 "bdev_virtio_detach_controller", 00:06:01.737 "bdev_virtio_blk_set_hotplug", 00:06:01.737 "bdev_iscsi_delete", 00:06:01.737 "bdev_iscsi_create", 00:06:01.737 "bdev_iscsi_set_options", 00:06:01.737 "accel_error_inject_error", 00:06:01.737 "ioat_scan_accel_module", 00:06:01.737 "dsa_scan_accel_module", 00:06:01.737 "iaa_scan_accel_module", 00:06:01.737 "keyring_file_remove_key", 00:06:01.737 "keyring_file_add_key", 00:06:01.737 "keyring_linux_set_options", 00:06:01.737 "fsdev_aio_delete", 00:06:01.737 "fsdev_aio_create", 00:06:01.737 "iscsi_get_histogram", 00:06:01.737 "iscsi_enable_histogram", 00:06:01.737 "iscsi_set_options", 00:06:01.737 "iscsi_get_auth_groups", 00:06:01.737 "iscsi_auth_group_remove_secret", 00:06:01.737 "iscsi_auth_group_add_secret", 00:06:01.737 "iscsi_delete_auth_group", 00:06:01.737 "iscsi_create_auth_group", 00:06:01.737 "iscsi_set_discovery_auth", 00:06:01.737 "iscsi_get_options", 00:06:01.737 "iscsi_target_node_request_logout", 00:06:01.737 "iscsi_target_node_set_redirect", 00:06:01.737 "iscsi_target_node_set_auth", 00:06:01.737 "iscsi_target_node_add_lun", 00:06:01.737 "iscsi_get_stats", 00:06:01.738 "iscsi_get_connections", 00:06:01.738 "iscsi_portal_group_set_auth", 00:06:01.738 "iscsi_start_portal_group", 00:06:01.738 "iscsi_delete_portal_group", 00:06:01.738 "iscsi_create_portal_group", 00:06:01.738 "iscsi_get_portal_groups", 00:06:01.738 "iscsi_delete_target_node", 00:06:01.738 "iscsi_target_node_remove_pg_ig_maps", 00:06:01.738 "iscsi_target_node_add_pg_ig_maps", 00:06:01.738 "iscsi_create_target_node", 00:06:01.738 "iscsi_get_target_nodes", 00:06:01.738 "iscsi_delete_initiator_group", 00:06:01.738 "iscsi_initiator_group_remove_initiators", 00:06:01.738 "iscsi_initiator_group_add_initiators", 00:06:01.738 "iscsi_create_initiator_group", 00:06:01.738 "iscsi_get_initiator_groups", 00:06:01.738 "nvmf_set_crdt", 00:06:01.738 "nvmf_set_config", 00:06:01.738 "nvmf_set_max_subsystems", 00:06:01.738 "nvmf_stop_mdns_prr", 00:06:01.738 "nvmf_publish_mdns_prr", 00:06:01.738 "nvmf_subsystem_get_listeners", 00:06:01.738 "nvmf_subsystem_get_qpairs", 00:06:01.738 "nvmf_subsystem_get_controllers", 00:06:01.738 "nvmf_get_stats", 00:06:01.738 "nvmf_get_transports", 00:06:01.738 "nvmf_create_transport", 00:06:01.738 "nvmf_get_targets", 00:06:01.738 
"nvmf_delete_target", 00:06:01.738 "nvmf_create_target", 00:06:01.738 "nvmf_subsystem_allow_any_host", 00:06:01.738 "nvmf_subsystem_set_keys", 00:06:01.738 "nvmf_subsystem_remove_host", 00:06:01.738 "nvmf_subsystem_add_host", 00:06:01.738 "nvmf_ns_remove_host", 00:06:01.738 "nvmf_ns_add_host", 00:06:01.738 "nvmf_subsystem_remove_ns", 00:06:01.738 "nvmf_subsystem_set_ns_ana_group", 00:06:01.738 "nvmf_subsystem_add_ns", 00:06:01.738 "nvmf_subsystem_listener_set_ana_state", 00:06:01.738 "nvmf_discovery_get_referrals", 00:06:01.738 "nvmf_discovery_remove_referral", 00:06:01.738 "nvmf_discovery_add_referral", 00:06:01.738 "nvmf_subsystem_remove_listener", 00:06:01.738 "nvmf_subsystem_add_listener", 00:06:01.738 "nvmf_delete_subsystem", 00:06:01.738 "nvmf_create_subsystem", 00:06:01.738 "nvmf_get_subsystems", 00:06:01.738 "env_dpdk_get_mem_stats", 00:06:01.738 "nbd_get_disks", 00:06:01.738 "nbd_stop_disk", 00:06:01.738 "nbd_start_disk", 00:06:01.738 "ublk_recover_disk", 00:06:01.738 "ublk_get_disks", 00:06:01.738 "ublk_stop_disk", 00:06:01.738 "ublk_start_disk", 00:06:01.738 "ublk_destroy_target", 00:06:01.738 "ublk_create_target", 00:06:01.738 "virtio_blk_create_transport", 00:06:01.738 "virtio_blk_get_transports", 00:06:01.738 "vhost_controller_set_coalescing", 00:06:01.738 "vhost_get_controllers", 00:06:01.738 "vhost_delete_controller", 00:06:01.738 "vhost_create_blk_controller", 00:06:01.738 "vhost_scsi_controller_remove_target", 00:06:01.738 "vhost_scsi_controller_add_target", 00:06:01.738 "vhost_start_scsi_controller", 00:06:01.738 "vhost_create_scsi_controller", 00:06:01.738 "thread_set_cpumask", 00:06:01.738 "scheduler_set_options", 00:06:01.738 "framework_get_governor", 00:06:01.738 "framework_get_scheduler", 00:06:01.738 "framework_set_scheduler", 00:06:01.738 "framework_get_reactors", 00:06:01.738 "thread_get_io_channels", 00:06:01.738 "thread_get_pollers", 00:06:01.738 "thread_get_stats", 00:06:01.738 "framework_monitor_context_switch", 00:06:01.738 "spdk_kill_instance", 00:06:01.738 "log_enable_timestamps", 00:06:01.738 "log_get_flags", 00:06:01.738 "log_clear_flag", 00:06:01.738 "log_set_flag", 00:06:01.738 "log_get_level", 00:06:01.738 "log_set_level", 00:06:01.738 "log_get_print_level", 00:06:01.738 "log_set_print_level", 00:06:01.738 "framework_enable_cpumask_locks", 00:06:01.738 "framework_disable_cpumask_locks", 00:06:01.738 "framework_wait_init", 00:06:01.738 "framework_start_init", 00:06:01.738 "scsi_get_devices", 00:06:01.738 "bdev_get_histogram", 00:06:01.738 "bdev_enable_histogram", 00:06:01.738 "bdev_set_qos_limit", 00:06:01.738 "bdev_set_qd_sampling_period", 00:06:01.738 "bdev_get_bdevs", 00:06:01.738 "bdev_reset_iostat", 00:06:01.738 "bdev_get_iostat", 00:06:01.738 "bdev_examine", 00:06:01.738 "bdev_wait_for_examine", 00:06:01.738 "bdev_set_options", 00:06:01.738 "accel_get_stats", 00:06:01.738 "accel_set_options", 00:06:01.738 "accel_set_driver", 00:06:01.738 "accel_crypto_key_destroy", 00:06:01.738 "accel_crypto_keys_get", 00:06:01.738 "accel_crypto_key_create", 00:06:01.738 "accel_assign_opc", 00:06:01.738 "accel_get_module_info", 00:06:01.738 "accel_get_opc_assignments", 00:06:01.738 "vmd_rescan", 00:06:01.738 "vmd_remove_device", 00:06:01.738 "vmd_enable", 00:06:01.738 "sock_get_default_impl", 00:06:01.738 "sock_set_default_impl", 00:06:01.738 "sock_impl_set_options", 00:06:01.738 "sock_impl_get_options", 00:06:01.738 "iobuf_get_stats", 00:06:01.738 "iobuf_set_options", 00:06:01.738 "keyring_get_keys", 00:06:01.738 "framework_get_pci_devices", 00:06:01.738 
"framework_get_config", 00:06:01.738 "framework_get_subsystems", 00:06:01.738 "fsdev_set_opts", 00:06:01.738 "fsdev_get_opts", 00:06:01.738 "trace_get_info", 00:06:01.738 "trace_get_tpoint_group_mask", 00:06:01.738 "trace_disable_tpoint_group", 00:06:01.738 "trace_enable_tpoint_group", 00:06:01.738 "trace_clear_tpoint_mask", 00:06:01.738 "trace_set_tpoint_mask", 00:06:01.738 "notify_get_notifications", 00:06:01.738 "notify_get_types", 00:06:01.738 "spdk_get_version", 00:06:01.738 "rpc_get_methods" 00:06:01.738 ] 00:06:01.738 06:39:54 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:06:01.738 06:39:54 spdkcli_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:01.738 06:39:54 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:01.738 06:39:54 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:06:01.738 06:39:54 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 70149 00:06:01.738 06:39:54 spdkcli_tcp -- common/autotest_common.sh@954 -- # '[' -z 70149 ']' 00:06:01.738 06:39:54 spdkcli_tcp -- common/autotest_common.sh@958 -- # kill -0 70149 00:06:01.738 06:39:54 spdkcli_tcp -- common/autotest_common.sh@959 -- # uname 00:06:01.738 06:39:54 spdkcli_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:01.738 06:39:54 spdkcli_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70149 00:06:01.738 killing process with pid 70149 00:06:01.738 06:39:54 spdkcli_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:01.738 06:39:54 spdkcli_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:01.738 06:39:54 spdkcli_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70149' 00:06:01.738 06:39:54 spdkcli_tcp -- common/autotest_common.sh@973 -- # kill 70149 00:06:01.738 06:39:54 spdkcli_tcp -- common/autotest_common.sh@978 -- # wait 70149 00:06:02.312 ************************************ 00:06:02.312 END TEST spdkcli_tcp 00:06:02.312 ************************************ 00:06:02.312 00:06:02.312 real 0m1.661s 00:06:02.312 user 0m2.947s 00:06:02.312 sys 0m0.429s 00:06:02.312 06:39:55 spdkcli_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:02.312 06:39:55 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:02.312 06:39:55 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:02.312 06:39:55 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:02.312 06:39:55 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:02.312 06:39:55 -- common/autotest_common.sh@10 -- # set +x 00:06:02.312 ************************************ 00:06:02.312 START TEST dpdk_mem_utility 00:06:02.312 ************************************ 00:06:02.312 06:39:55 dpdk_mem_utility -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:02.312 * Looking for test storage... 
00:06:02.312 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:06:02.312 06:39:55 dpdk_mem_utility -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:02.312 06:39:55 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lcov --version 00:06:02.312 06:39:55 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:02.312 06:39:55 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:02.312 06:39:55 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:02.312 06:39:55 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:02.312 06:39:55 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:02.312 06:39:55 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:06:02.312 06:39:55 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:06:02.312 06:39:55 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:06:02.312 06:39:55 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:06:02.312 06:39:55 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:06:02.312 06:39:55 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:06:02.312 06:39:55 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:06:02.312 06:39:55 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:02.312 06:39:55 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:06:02.312 06:39:55 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:06:02.312 06:39:55 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:02.312 06:39:55 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:02.312 06:39:55 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:06:02.312 06:39:55 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:06:02.312 06:39:55 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:02.312 06:39:55 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:06:02.312 06:39:55 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:06:02.312 06:39:55 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:06:02.312 06:39:55 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:06:02.312 06:39:55 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:02.312 06:39:55 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:06:02.312 06:39:55 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:06:02.312 06:39:55 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:02.312 06:39:55 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:02.312 06:39:55 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:06:02.312 06:39:55 dpdk_mem_utility -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:02.312 06:39:55 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:02.312 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:02.312 --rc genhtml_branch_coverage=1 00:06:02.312 --rc genhtml_function_coverage=1 00:06:02.312 --rc genhtml_legend=1 00:06:02.312 --rc geninfo_all_blocks=1 00:06:02.312 --rc geninfo_unexecuted_blocks=1 00:06:02.312 00:06:02.312 ' 00:06:02.312 06:39:55 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:02.312 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:02.312 --rc 
genhtml_branch_coverage=1 00:06:02.312 --rc genhtml_function_coverage=1 00:06:02.312 --rc genhtml_legend=1 00:06:02.312 --rc geninfo_all_blocks=1 00:06:02.312 --rc geninfo_unexecuted_blocks=1 00:06:02.312 00:06:02.312 ' 00:06:02.312 06:39:55 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:02.312 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:02.312 --rc genhtml_branch_coverage=1 00:06:02.312 --rc genhtml_function_coverage=1 00:06:02.312 --rc genhtml_legend=1 00:06:02.312 --rc geninfo_all_blocks=1 00:06:02.312 --rc geninfo_unexecuted_blocks=1 00:06:02.312 00:06:02.312 ' 00:06:02.313 06:39:55 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:02.313 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:02.313 --rc genhtml_branch_coverage=1 00:06:02.313 --rc genhtml_function_coverage=1 00:06:02.313 --rc genhtml_legend=1 00:06:02.313 --rc geninfo_all_blocks=1 00:06:02.313 --rc geninfo_unexecuted_blocks=1 00:06:02.313 00:06:02.313 ' 00:06:02.313 06:39:55 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:06:02.313 06:39:55 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=70249 00:06:02.313 06:39:55 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 70249 00:06:02.313 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:02.313 06:39:55 dpdk_mem_utility -- common/autotest_common.sh@835 -- # '[' -z 70249 ']' 00:06:02.313 06:39:55 dpdk_mem_utility -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:02.313 06:39:55 dpdk_mem_utility -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:02.313 06:39:55 dpdk_mem_utility -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:02.313 06:39:55 dpdk_mem_utility -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:02.313 06:39:55 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:02.313 06:39:55 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:02.313 [2024-11-18 06:39:55.358707] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 
00:06:02.313 [2024-11-18 06:39:55.359022] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70249 ] 00:06:02.575 [2024-11-18 06:39:55.512555] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:02.575 [2024-11-18 06:39:55.541425] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:03.148 06:39:56 dpdk_mem_utility -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:03.148 06:39:56 dpdk_mem_utility -- common/autotest_common.sh@868 -- # return 0 00:06:03.148 06:39:56 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:06:03.148 06:39:56 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:06:03.148 06:39:56 dpdk_mem_utility -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:03.148 06:39:56 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:03.148 { 00:06:03.148 "filename": "/tmp/spdk_mem_dump.txt" 00:06:03.148 } 00:06:03.148 06:39:56 dpdk_mem_utility -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:03.148 06:39:56 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:06:03.412 DPDK memory size 810.000000 MiB in 1 heap(s) 00:06:03.412 1 heaps totaling size 810.000000 MiB 00:06:03.412 size: 810.000000 MiB heap id: 0 00:06:03.412 end heaps---------- 00:06:03.412 9 mempools totaling size 595.772034 MiB 00:06:03.412 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:06:03.412 size: 158.602051 MiB name: PDU_data_out_Pool 00:06:03.412 size: 92.545471 MiB name: bdev_io_70249 00:06:03.412 size: 50.003479 MiB name: msgpool_70249 00:06:03.412 size: 36.509338 MiB name: fsdev_io_70249 00:06:03.412 size: 21.763794 MiB name: PDU_Pool 00:06:03.412 size: 19.513306 MiB name: SCSI_TASK_Pool 00:06:03.412 size: 4.133484 MiB name: evtpool_70249 00:06:03.412 size: 0.026123 MiB name: Session_Pool 00:06:03.412 end mempools------- 00:06:03.412 6 memzones totaling size 4.142822 MiB 00:06:03.412 size: 1.000366 MiB name: RG_ring_0_70249 00:06:03.412 size: 1.000366 MiB name: RG_ring_1_70249 00:06:03.412 size: 1.000366 MiB name: RG_ring_4_70249 00:06:03.412 size: 1.000366 MiB name: RG_ring_5_70249 00:06:03.412 size: 0.125366 MiB name: RG_ring_2_70249 00:06:03.412 size: 0.015991 MiB name: RG_ring_3_70249 00:06:03.412 end memzones------- 00:06:03.412 06:39:56 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:06:03.412 heap id: 0 total size: 810.000000 MiB number of busy elements: 311 number of free elements: 15 00:06:03.412 list of free elements. 
size: 10.813599 MiB 00:06:03.412 element at address: 0x200018a00000 with size: 0.999878 MiB 00:06:03.412 element at address: 0x200018c00000 with size: 0.999878 MiB 00:06:03.412 element at address: 0x200031800000 with size: 0.994446 MiB 00:06:03.412 element at address: 0x200000400000 with size: 0.993958 MiB 00:06:03.412 element at address: 0x200006400000 with size: 0.959839 MiB 00:06:03.412 element at address: 0x200012c00000 with size: 0.954285 MiB 00:06:03.412 element at address: 0x200018e00000 with size: 0.936584 MiB 00:06:03.412 element at address: 0x200000200000 with size: 0.717346 MiB 00:06:03.412 element at address: 0x20001a600000 with size: 0.568054 MiB 00:06:03.412 element at address: 0x20000a600000 with size: 0.488892 MiB 00:06:03.412 element at address: 0x200000c00000 with size: 0.487000 MiB 00:06:03.412 element at address: 0x200019000000 with size: 0.485657 MiB 00:06:03.412 element at address: 0x200003e00000 with size: 0.480286 MiB 00:06:03.412 element at address: 0x200027a00000 with size: 0.395752 MiB 00:06:03.412 element at address: 0x200000800000 with size: 0.351746 MiB 00:06:03.412 list of standard malloc elements. size: 199.267517 MiB 00:06:03.412 element at address: 0x20000a7fff80 with size: 132.000122 MiB 00:06:03.412 element at address: 0x2000065fff80 with size: 64.000122 MiB 00:06:03.412 element at address: 0x200018afff80 with size: 1.000122 MiB 00:06:03.412 element at address: 0x200018cfff80 with size: 1.000122 MiB 00:06:03.412 element at address: 0x200018efff80 with size: 1.000122 MiB 00:06:03.412 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:06:03.413 element at address: 0x200018eeff00 with size: 0.062622 MiB 00:06:03.413 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:06:03.413 element at address: 0x200018eefdc0 with size: 0.000305 MiB 00:06:03.413 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:06:03.413 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:06:03.413 element at address: 0x2000004fe740 with size: 0.000183 MiB 00:06:03.413 element at address: 0x2000004fe800 with size: 0.000183 MiB 00:06:03.413 element at address: 0x2000004fe8c0 with size: 0.000183 MiB 00:06:03.413 element at address: 0x2000004fe980 with size: 0.000183 MiB 00:06:03.413 element at address: 0x2000004fea40 with size: 0.000183 MiB 00:06:03.413 element at address: 0x2000004feb00 with size: 0.000183 MiB 00:06:03.413 element at address: 0x2000004febc0 with size: 0.000183 MiB 00:06:03.413 element at address: 0x2000004fec80 with size: 0.000183 MiB 00:06:03.413 element at address: 0x2000004fed40 with size: 0.000183 MiB 00:06:03.413 element at address: 0x2000004fee00 with size: 0.000183 MiB 00:06:03.413 element at address: 0x2000004feec0 with size: 0.000183 MiB 00:06:03.413 element at address: 0x2000004fef80 with size: 0.000183 MiB 00:06:03.413 element at address: 0x2000004ff040 with size: 0.000183 MiB 00:06:03.413 element at address: 0x2000004ff100 with size: 0.000183 MiB 00:06:03.413 element at address: 0x2000004ff1c0 with size: 0.000183 MiB 00:06:03.413 element at address: 0x2000004ff280 with size: 0.000183 MiB 00:06:03.413 element at address: 0x2000004ff340 with size: 0.000183 MiB 00:06:03.413 element at address: 0x2000004ff400 with size: 0.000183 MiB 00:06:03.413 element at address: 0x2000004ff4c0 with size: 0.000183 MiB 00:06:03.413 element at address: 0x2000004ff580 with size: 0.000183 MiB 00:06:03.413 element at address: 0x2000004ff640 with size: 0.000183 MiB 00:06:03.413 element at address: 0x2000004ff700 with size: 0.000183 MiB 
00:06:03.413 element at address: 0x2000004ff7c0 with size: 0.000183 MiB 00:06:03.413 element at address: 0x2000004ff880 with size: 0.000183 MiB 00:06:03.413 element at address: 0x2000004ff940 with size: 0.000183 MiB 00:06:03.413 element at address: 0x2000004ffa00 with size: 0.000183 MiB 00:06:03.413 element at address: 0x2000004ffac0 with size: 0.000183 MiB 00:06:03.413 element at address: 0x2000004ffcc0 with size: 0.000183 MiB 00:06:03.413 element at address: 0x2000004ffd80 with size: 0.000183 MiB 00:06:03.413 element at address: 0x2000004ffe40 with size: 0.000183 MiB 00:06:03.413 element at address: 0x20000085a0c0 with size: 0.000183 MiB 00:06:03.413 element at address: 0x20000085a2c0 with size: 0.000183 MiB 00:06:03.413 element at address: 0x20000085e580 with size: 0.000183 MiB 00:06:03.413 element at address: 0x20000087e840 with size: 0.000183 MiB 00:06:03.413 element at address: 0x20000087e900 with size: 0.000183 MiB 00:06:03.413 element at address: 0x20000087e9c0 with size: 0.000183 MiB 00:06:03.413 element at address: 0x20000087ea80 with size: 0.000183 MiB 00:06:03.413 element at address: 0x20000087eb40 with size: 0.000183 MiB 00:06:03.413 element at address: 0x20000087ec00 with size: 0.000183 MiB 00:06:03.413 element at address: 0x20000087ecc0 with size: 0.000183 MiB 00:06:03.413 element at address: 0x20000087ed80 with size: 0.000183 MiB 00:06:03.413 element at address: 0x20000087ee40 with size: 0.000183 MiB 00:06:03.413 element at address: 0x20000087ef00 with size: 0.000183 MiB 00:06:03.413 element at address: 0x20000087efc0 with size: 0.000183 MiB 00:06:03.413 element at address: 0x20000087f080 with size: 0.000183 MiB 00:06:03.413 element at address: 0x20000087f140 with size: 0.000183 MiB 00:06:03.413 element at address: 0x20000087f200 with size: 0.000183 MiB 00:06:03.413 element at address: 0x20000087f2c0 with size: 0.000183 MiB 00:06:03.413 element at address: 0x20000087f380 with size: 0.000183 MiB 00:06:03.413 element at address: 0x20000087f440 with size: 0.000183 MiB 00:06:03.413 element at address: 0x20000087f500 with size: 0.000183 MiB 00:06:03.413 element at address: 0x20000087f5c0 with size: 0.000183 MiB 00:06:03.413 element at address: 0x20000087f680 with size: 0.000183 MiB 00:06:03.413 element at address: 0x2000008ff940 with size: 0.000183 MiB 00:06:03.413 element at address: 0x2000008ffb40 with size: 0.000183 MiB 00:06:03.413 element at address: 0x200000c7cac0 with size: 0.000183 MiB 00:06:03.413 element at address: 0x200000c7cb80 with size: 0.000183 MiB 00:06:03.413 element at address: 0x200000c7cc40 with size: 0.000183 MiB 00:06:03.413 element at address: 0x200000c7cd00 with size: 0.000183 MiB 00:06:03.413 element at address: 0x200000c7cdc0 with size: 0.000183 MiB 00:06:03.413 element at address: 0x200000c7ce80 with size: 0.000183 MiB 00:06:03.413 element at address: 0x200000c7cf40 with size: 0.000183 MiB 00:06:03.413 element at address: 0x200000c7d000 with size: 0.000183 MiB 00:06:03.413 element at address: 0x200000c7d0c0 with size: 0.000183 MiB 00:06:03.413 element at address: 0x200000c7d180 with size: 0.000183 MiB 00:06:03.413 element at address: 0x200000c7d240 with size: 0.000183 MiB 00:06:03.413 element at address: 0x200000c7d300 with size: 0.000183 MiB 00:06:03.413 element at address: 0x200000c7d3c0 with size: 0.000183 MiB 00:06:03.413 element at address: 0x200000c7d480 with size: 0.000183 MiB 00:06:03.413 element at address: 0x200000c7d540 with size: 0.000183 MiB 00:06:03.413 element at address: 0x200000c7d600 with size: 0.000183 MiB 00:06:03.413 element at 
address: 0x200000c7d6c0 with size: 0.000183 MiB 00:06:03.413 element at address: 0x200000c7d780 with size: 0.000183 MiB 00:06:03.413 element at address: 0x200000c7d840 with size: 0.000183 MiB 00:06:03.413 element at address: 0x200000c7d900 with size: 0.000183 MiB 00:06:03.413 element at address: 0x200000c7d9c0 with size: 0.000183 MiB 00:06:03.413 element at address: 0x200000c7da80 with size: 0.000183 MiB 00:06:03.413 element at address: 0x200000c7db40 with size: 0.000183 MiB 00:06:03.413 element at address: 0x200000c7dc00 with size: 0.000183 MiB 00:06:03.413 element at address: 0x200000c7dcc0 with size: 0.000183 MiB 00:06:03.413 element at address: 0x200000c7dd80 with size: 0.000183 MiB 00:06:03.413 element at address: 0x200000c7de40 with size: 0.000183 MiB 00:06:03.413 element at address: 0x200000c7df00 with size: 0.000183 MiB 00:06:03.413 element at address: 0x200000c7dfc0 with size: 0.000183 MiB 00:06:03.413 element at address: 0x200000c7e080 with size: 0.000183 MiB 00:06:03.413 element at address: 0x200000c7e140 with size: 0.000183 MiB 00:06:03.413 element at address: 0x200000c7e200 with size: 0.000183 MiB 00:06:03.413 element at address: 0x200000c7e2c0 with size: 0.000183 MiB 00:06:03.413 element at address: 0x200000c7e380 with size: 0.000183 MiB 00:06:03.413 element at address: 0x200000c7e440 with size: 0.000183 MiB 00:06:03.413 element at address: 0x200000c7e500 with size: 0.000183 MiB 00:06:03.413 element at address: 0x200000c7e5c0 with size: 0.000183 MiB 00:06:03.413 element at address: 0x200000c7e680 with size: 0.000183 MiB 00:06:03.413 element at address: 0x200000c7e740 with size: 0.000183 MiB 00:06:03.413 element at address: 0x200000c7e800 with size: 0.000183 MiB 00:06:03.413 element at address: 0x200000c7e8c0 with size: 0.000183 MiB 00:06:03.413 element at address: 0x200000c7e980 with size: 0.000183 MiB 00:06:03.413 element at address: 0x200000c7ea40 with size: 0.000183 MiB 00:06:03.413 element at address: 0x200000c7eb00 with size: 0.000183 MiB 00:06:03.413 element at address: 0x200000c7ebc0 with size: 0.000183 MiB 00:06:03.413 element at address: 0x200000c7ec80 with size: 0.000183 MiB 00:06:03.413 element at address: 0x200000c7ed40 with size: 0.000183 MiB 00:06:03.413 element at address: 0x200000cff000 with size: 0.000183 MiB 00:06:03.413 element at address: 0x200000cff0c0 with size: 0.000183 MiB 00:06:03.413 element at address: 0x200003e7af40 with size: 0.000183 MiB 00:06:03.413 element at address: 0x200003e7b000 with size: 0.000183 MiB 00:06:03.413 element at address: 0x200003e7b0c0 with size: 0.000183 MiB 00:06:03.413 element at address: 0x200003e7b180 with size: 0.000183 MiB 00:06:03.413 element at address: 0x200003e7b240 with size: 0.000183 MiB 00:06:03.413 element at address: 0x200003e7b300 with size: 0.000183 MiB 00:06:03.413 element at address: 0x200003e7b3c0 with size: 0.000183 MiB 00:06:03.413 element at address: 0x200003e7b480 with size: 0.000183 MiB 00:06:03.413 element at address: 0x200003e7b540 with size: 0.000183 MiB 00:06:03.413 element at address: 0x200003e7b600 with size: 0.000183 MiB 00:06:03.413 element at address: 0x200003e7b6c0 with size: 0.000183 MiB 00:06:03.413 element at address: 0x200003efb980 with size: 0.000183 MiB 00:06:03.413 element at address: 0x2000064fdd80 with size: 0.000183 MiB 00:06:03.413 element at address: 0x20000a67d280 with size: 0.000183 MiB 00:06:03.413 element at address: 0x20000a67d340 with size: 0.000183 MiB 00:06:03.413 element at address: 0x20000a67d400 with size: 0.000183 MiB 00:06:03.413 element at address: 0x20000a67d4c0 
with size: 0.000183 MiB 00:06:03.413 element at address: 0x20000a67d580 with size: 0.000183 MiB 00:06:03.413 element at address: 0x20000a67d640 with size: 0.000183 MiB 00:06:03.413 element at address: 0x20000a67d700 with size: 0.000183 MiB 00:06:03.413 element at address: 0x20000a67d7c0 with size: 0.000183 MiB 00:06:03.413 element at address: 0x20000a67d880 with size: 0.000183 MiB 00:06:03.413 element at address: 0x20000a67d940 with size: 0.000183 MiB 00:06:03.413 element at address: 0x20000a67da00 with size: 0.000183 MiB 00:06:03.413 element at address: 0x20000a67dac0 with size: 0.000183 MiB 00:06:03.413 element at address: 0x20000a6fdd80 with size: 0.000183 MiB 00:06:03.413 element at address: 0x200012cf44c0 with size: 0.000183 MiB 00:06:03.413 element at address: 0x200018eefc40 with size: 0.000183 MiB 00:06:03.413 element at address: 0x200018eefd00 with size: 0.000183 MiB 00:06:03.413 element at address: 0x2000190bc740 with size: 0.000183 MiB 00:06:03.413 element at address: 0x20001a6916c0 with size: 0.000183 MiB 00:06:03.413 element at address: 0x20001a691780 with size: 0.000183 MiB 00:06:03.413 element at address: 0x20001a691840 with size: 0.000183 MiB 00:06:03.413 element at address: 0x20001a691900 with size: 0.000183 MiB 00:06:03.413 element at address: 0x20001a6919c0 with size: 0.000183 MiB 00:06:03.413 element at address: 0x20001a691a80 with size: 0.000183 MiB 00:06:03.413 element at address: 0x20001a691b40 with size: 0.000183 MiB 00:06:03.413 element at address: 0x20001a691c00 with size: 0.000183 MiB 00:06:03.413 element at address: 0x20001a691cc0 with size: 0.000183 MiB 00:06:03.413 element at address: 0x20001a691d80 with size: 0.000183 MiB 00:06:03.413 element at address: 0x20001a691e40 with size: 0.000183 MiB 00:06:03.413 element at address: 0x20001a691f00 with size: 0.000183 MiB 00:06:03.414 element at address: 0x20001a691fc0 with size: 0.000183 MiB 00:06:03.414 element at address: 0x20001a692080 with size: 0.000183 MiB 00:06:03.414 element at address: 0x20001a692140 with size: 0.000183 MiB 00:06:03.414 element at address: 0x20001a692200 with size: 0.000183 MiB 00:06:03.414 element at address: 0x20001a6922c0 with size: 0.000183 MiB 00:06:03.414 element at address: 0x20001a692380 with size: 0.000183 MiB 00:06:03.414 element at address: 0x20001a692440 with size: 0.000183 MiB 00:06:03.414 element at address: 0x20001a692500 with size: 0.000183 MiB 00:06:03.414 element at address: 0x20001a6925c0 with size: 0.000183 MiB 00:06:03.414 element at address: 0x20001a692680 with size: 0.000183 MiB 00:06:03.414 element at address: 0x20001a692740 with size: 0.000183 MiB 00:06:03.414 element at address: 0x20001a692800 with size: 0.000183 MiB 00:06:03.414 element at address: 0x20001a6928c0 with size: 0.000183 MiB 00:06:03.414 element at address: 0x20001a692980 with size: 0.000183 MiB 00:06:03.414 element at address: 0x20001a692a40 with size: 0.000183 MiB 00:06:03.414 element at address: 0x20001a692b00 with size: 0.000183 MiB 00:06:03.414 element at address: 0x20001a692bc0 with size: 0.000183 MiB 00:06:03.414 element at address: 0x20001a692c80 with size: 0.000183 MiB 00:06:03.414 element at address: 0x20001a692d40 with size: 0.000183 MiB 00:06:03.414 element at address: 0x20001a692e00 with size: 0.000183 MiB 00:06:03.414 element at address: 0x20001a692ec0 with size: 0.000183 MiB 00:06:03.414 element at address: 0x20001a692f80 with size: 0.000183 MiB 00:06:03.414 element at address: 0x20001a693040 with size: 0.000183 MiB 00:06:03.414 element at address: 0x20001a693100 with size: 0.000183 MiB 
00:06:03.414 element at address: 0x20001a6931c0 with size: 0.000183 MiB 00:06:03.414 element at address: 0x20001a693280 with size: 0.000183 MiB 00:06:03.414 element at address: 0x20001a693340 with size: 0.000183 MiB 00:06:03.414 element at address: 0x20001a693400 with size: 0.000183 MiB 00:06:03.414 element at address: 0x20001a6934c0 with size: 0.000183 MiB 00:06:03.414 element at address: 0x20001a693580 with size: 0.000183 MiB 00:06:03.414 element at address: 0x20001a693640 with size: 0.000183 MiB 00:06:03.414 element at address: 0x20001a693700 with size: 0.000183 MiB 00:06:03.414 element at address: 0x20001a6937c0 with size: 0.000183 MiB 00:06:03.414 element at address: 0x20001a693880 with size: 0.000183 MiB 00:06:03.414 element at address: 0x20001a693940 with size: 0.000183 MiB 00:06:03.414 element at address: 0x20001a693a00 with size: 0.000183 MiB 00:06:03.414 element at address: 0x20001a693ac0 with size: 0.000183 MiB 00:06:03.414 element at address: 0x20001a693b80 with size: 0.000183 MiB 00:06:03.414 element at address: 0x20001a693c40 with size: 0.000183 MiB 00:06:03.414 element at address: 0x20001a693d00 with size: 0.000183 MiB 00:06:03.414 element at address: 0x20001a693dc0 with size: 0.000183 MiB 00:06:03.414 element at address: 0x20001a693e80 with size: 0.000183 MiB 00:06:03.414 element at address: 0x20001a693f40 with size: 0.000183 MiB 00:06:03.414 element at address: 0x20001a694000 with size: 0.000183 MiB 00:06:03.414 element at address: 0x20001a6940c0 with size: 0.000183 MiB 00:06:03.414 element at address: 0x20001a694180 with size: 0.000183 MiB 00:06:03.414 element at address: 0x20001a694240 with size: 0.000183 MiB 00:06:03.414 element at address: 0x20001a694300 with size: 0.000183 MiB 00:06:03.414 element at address: 0x20001a6943c0 with size: 0.000183 MiB 00:06:03.414 element at address: 0x20001a694480 with size: 0.000183 MiB 00:06:03.414 element at address: 0x20001a694540 with size: 0.000183 MiB 00:06:03.414 element at address: 0x20001a694600 with size: 0.000183 MiB 00:06:03.414 element at address: 0x20001a6946c0 with size: 0.000183 MiB 00:06:03.414 element at address: 0x20001a694780 with size: 0.000183 MiB 00:06:03.414 element at address: 0x20001a694840 with size: 0.000183 MiB 00:06:03.414 element at address: 0x20001a694900 with size: 0.000183 MiB 00:06:03.414 element at address: 0x20001a6949c0 with size: 0.000183 MiB 00:06:03.414 element at address: 0x20001a694a80 with size: 0.000183 MiB 00:06:03.414 element at address: 0x20001a694b40 with size: 0.000183 MiB 00:06:03.414 element at address: 0x20001a694c00 with size: 0.000183 MiB 00:06:03.414 element at address: 0x20001a694cc0 with size: 0.000183 MiB 00:06:03.414 element at address: 0x20001a694d80 with size: 0.000183 MiB 00:06:03.414 element at address: 0x20001a694e40 with size: 0.000183 MiB 00:06:03.414 element at address: 0x20001a694f00 with size: 0.000183 MiB 00:06:03.414 element at address: 0x20001a694fc0 with size: 0.000183 MiB 00:06:03.414 element at address: 0x20001a695080 with size: 0.000183 MiB 00:06:03.414 element at address: 0x20001a695140 with size: 0.000183 MiB 00:06:03.414 element at address: 0x20001a695200 with size: 0.000183 MiB 00:06:03.414 element at address: 0x20001a6952c0 with size: 0.000183 MiB 00:06:03.414 element at address: 0x20001a695380 with size: 0.000183 MiB 00:06:03.414 element at address: 0x20001a695440 with size: 0.000183 MiB 00:06:03.414 element at address: 0x200027a65500 with size: 0.000183 MiB 00:06:03.414 element at address: 0x200027a655c0 with size: 0.000183 MiB 00:06:03.414 element at 
address: 0x200027a6c1c0 with size: 0.000183 MiB 00:06:03.414 element at address: 0x200027a6c3c0 with size: 0.000183 MiB 00:06:03.414 element at address: 0x200027a6c480 with size: 0.000183 MiB 00:06:03.414 element at address: 0x200027a6c540 with size: 0.000183 MiB 00:06:03.414 element at address: 0x200027a6c600 with size: 0.000183 MiB 00:06:03.414 element at address: 0x200027a6c6c0 with size: 0.000183 MiB 00:06:03.414 element at address: 0x200027a6c780 with size: 0.000183 MiB 00:06:03.414 element at address: 0x200027a6c840 with size: 0.000183 MiB 00:06:03.414 element at address: 0x200027a6c900 with size: 0.000183 MiB 00:06:03.414 element at address: 0x200027a6c9c0 with size: 0.000183 MiB 00:06:03.414 element at address: 0x200027a6ca80 with size: 0.000183 MiB 00:06:03.414 element at address: 0x200027a6cb40 with size: 0.000183 MiB 00:06:03.414 element at address: 0x200027a6cc00 with size: 0.000183 MiB 00:06:03.414 element at address: 0x200027a6ccc0 with size: 0.000183 MiB 00:06:03.414 element at address: 0x200027a6cd80 with size: 0.000183 MiB 00:06:03.414 element at address: 0x200027a6ce40 with size: 0.000183 MiB 00:06:03.414 element at address: 0x200027a6cf00 with size: 0.000183 MiB 00:06:03.414 element at address: 0x200027a6cfc0 with size: 0.000183 MiB 00:06:03.414 element at address: 0x200027a6d080 with size: 0.000183 MiB 00:06:03.414 element at address: 0x200027a6d140 with size: 0.000183 MiB 00:06:03.414 element at address: 0x200027a6d200 with size: 0.000183 MiB 00:06:03.414 element at address: 0x200027a6d2c0 with size: 0.000183 MiB 00:06:03.414 element at address: 0x200027a6d380 with size: 0.000183 MiB 00:06:03.414 element at address: 0x200027a6d440 with size: 0.000183 MiB 00:06:03.414 element at address: 0x200027a6d500 with size: 0.000183 MiB 00:06:03.414 element at address: 0x200027a6d5c0 with size: 0.000183 MiB 00:06:03.414 element at address: 0x200027a6d680 with size: 0.000183 MiB 00:06:03.414 element at address: 0x200027a6d740 with size: 0.000183 MiB 00:06:03.414 element at address: 0x200027a6d800 with size: 0.000183 MiB 00:06:03.414 element at address: 0x200027a6d8c0 with size: 0.000183 MiB 00:06:03.414 element at address: 0x200027a6d980 with size: 0.000183 MiB 00:06:03.414 element at address: 0x200027a6da40 with size: 0.000183 MiB 00:06:03.414 element at address: 0x200027a6db00 with size: 0.000183 MiB 00:06:03.414 element at address: 0x200027a6dbc0 with size: 0.000183 MiB 00:06:03.414 element at address: 0x200027a6dc80 with size: 0.000183 MiB 00:06:03.414 element at address: 0x200027a6dd40 with size: 0.000183 MiB 00:06:03.414 element at address: 0x200027a6de00 with size: 0.000183 MiB 00:06:03.414 element at address: 0x200027a6dec0 with size: 0.000183 MiB 00:06:03.414 element at address: 0x200027a6df80 with size: 0.000183 MiB 00:06:03.414 element at address: 0x200027a6e040 with size: 0.000183 MiB 00:06:03.414 element at address: 0x200027a6e100 with size: 0.000183 MiB 00:06:03.414 element at address: 0x200027a6e1c0 with size: 0.000183 MiB 00:06:03.414 element at address: 0x200027a6e280 with size: 0.000183 MiB 00:06:03.414 element at address: 0x200027a6e340 with size: 0.000183 MiB 00:06:03.414 element at address: 0x200027a6e400 with size: 0.000183 MiB 00:06:03.414 element at address: 0x200027a6e4c0 with size: 0.000183 MiB 00:06:03.414 element at address: 0x200027a6e580 with size: 0.000183 MiB 00:06:03.414 element at address: 0x200027a6e640 with size: 0.000183 MiB 00:06:03.414 element at address: 0x200027a6e700 with size: 0.000183 MiB 00:06:03.414 element at address: 0x200027a6e7c0 
with size: 0.000183 MiB 00:06:03.414 element at address: 0x200027a6e880 with size: 0.000183 MiB 00:06:03.414 element at address: 0x200027a6e940 with size: 0.000183 MiB 00:06:03.414 element at address: 0x200027a6ea00 with size: 0.000183 MiB 00:06:03.414 element at address: 0x200027a6eac0 with size: 0.000183 MiB 00:06:03.414 element at address: 0x200027a6eb80 with size: 0.000183 MiB 00:06:03.414 element at address: 0x200027a6ec40 with size: 0.000183 MiB 00:06:03.414 element at address: 0x200027a6ed00 with size: 0.000183 MiB 00:06:03.414 element at address: 0x200027a6edc0 with size: 0.000183 MiB 00:06:03.414 element at address: 0x200027a6ee80 with size: 0.000183 MiB 00:06:03.414 element at address: 0x200027a6ef40 with size: 0.000183 MiB 00:06:03.414 element at address: 0x200027a6f000 with size: 0.000183 MiB 00:06:03.414 element at address: 0x200027a6f0c0 with size: 0.000183 MiB 00:06:03.414 element at address: 0x200027a6f180 with size: 0.000183 MiB 00:06:03.414 element at address: 0x200027a6f240 with size: 0.000183 MiB 00:06:03.414 element at address: 0x200027a6f300 with size: 0.000183 MiB 00:06:03.414 element at address: 0x200027a6f3c0 with size: 0.000183 MiB 00:06:03.414 element at address: 0x200027a6f480 with size: 0.000183 MiB 00:06:03.414 element at address: 0x200027a6f540 with size: 0.000183 MiB 00:06:03.414 element at address: 0x200027a6f600 with size: 0.000183 MiB 00:06:03.414 element at address: 0x200027a6f6c0 with size: 0.000183 MiB 00:06:03.414 element at address: 0x200027a6f780 with size: 0.000183 MiB 00:06:03.414 element at address: 0x200027a6f840 with size: 0.000183 MiB 00:06:03.414 element at address: 0x200027a6f900 with size: 0.000183 MiB 00:06:03.414 element at address: 0x200027a6f9c0 with size: 0.000183 MiB 00:06:03.414 element at address: 0x200027a6fa80 with size: 0.000183 MiB 00:06:03.414 element at address: 0x200027a6fb40 with size: 0.000183 MiB 00:06:03.414 element at address: 0x200027a6fc00 with size: 0.000183 MiB 00:06:03.414 element at address: 0x200027a6fcc0 with size: 0.000183 MiB 00:06:03.414 element at address: 0x200027a6fd80 with size: 0.000183 MiB 00:06:03.414 element at address: 0x200027a6fe40 with size: 0.000183 MiB 00:06:03.415 element at address: 0x200027a6ff00 with size: 0.000183 MiB 00:06:03.415 list of memzone associated elements. 
size: 599.918884 MiB 00:06:03.415 element at address: 0x20001a695500 with size: 211.416748 MiB 00:06:03.415 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:06:03.415 element at address: 0x200027a6ffc0 with size: 157.562561 MiB 00:06:03.415 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:06:03.415 element at address: 0x200012df4780 with size: 92.045044 MiB 00:06:03.415 associated memzone info: size: 92.044922 MiB name: MP_bdev_io_70249_0 00:06:03.415 element at address: 0x200000dff380 with size: 48.003052 MiB 00:06:03.415 associated memzone info: size: 48.002930 MiB name: MP_msgpool_70249_0 00:06:03.415 element at address: 0x200003ffdb80 with size: 36.008911 MiB 00:06:03.415 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_70249_0 00:06:03.415 element at address: 0x2000191be940 with size: 20.255554 MiB 00:06:03.415 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:06:03.415 element at address: 0x2000319feb40 with size: 18.005066 MiB 00:06:03.415 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:06:03.415 element at address: 0x2000004fff00 with size: 3.000244 MiB 00:06:03.415 associated memzone info: size: 3.000122 MiB name: MP_evtpool_70249_0 00:06:03.415 element at address: 0x2000009ffe00 with size: 2.000488 MiB 00:06:03.415 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_70249 00:06:03.415 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:06:03.415 associated memzone info: size: 1.007996 MiB name: MP_evtpool_70249 00:06:03.415 element at address: 0x20000a6fde40 with size: 1.008118 MiB 00:06:03.415 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:06:03.415 element at address: 0x2000190bc800 with size: 1.008118 MiB 00:06:03.415 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:06:03.415 element at address: 0x2000064fde40 with size: 1.008118 MiB 00:06:03.415 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:06:03.415 element at address: 0x200003efba40 with size: 1.008118 MiB 00:06:03.415 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:06:03.415 element at address: 0x200000cff180 with size: 1.000488 MiB 00:06:03.415 associated memzone info: size: 1.000366 MiB name: RG_ring_0_70249 00:06:03.415 element at address: 0x2000008ffc00 with size: 1.000488 MiB 00:06:03.415 associated memzone info: size: 1.000366 MiB name: RG_ring_1_70249 00:06:03.415 element at address: 0x200012cf4580 with size: 1.000488 MiB 00:06:03.415 associated memzone info: size: 1.000366 MiB name: RG_ring_4_70249 00:06:03.415 element at address: 0x2000318fe940 with size: 1.000488 MiB 00:06:03.415 associated memzone info: size: 1.000366 MiB name: RG_ring_5_70249 00:06:03.415 element at address: 0x20000087f740 with size: 0.500488 MiB 00:06:03.415 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_70249 00:06:03.415 element at address: 0x200000c7ee00 with size: 0.500488 MiB 00:06:03.415 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_70249 00:06:03.415 element at address: 0x20000a67db80 with size: 0.500488 MiB 00:06:03.415 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:06:03.415 element at address: 0x200003e7b780 with size: 0.500488 MiB 00:06:03.415 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:06:03.415 element at address: 0x20001907c540 with size: 0.250488 MiB 00:06:03.415 associated memzone info: size: 0.250366 
MiB name: RG_MP_PDU_immediate_data_Pool 00:06:03.415 element at address: 0x2000002b7a40 with size: 0.125488 MiB 00:06:03.415 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_70249 00:06:03.415 element at address: 0x20000085e640 with size: 0.125488 MiB 00:06:03.415 associated memzone info: size: 0.125366 MiB name: RG_ring_2_70249 00:06:03.415 element at address: 0x2000064f5b80 with size: 0.031738 MiB 00:06:03.415 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:06:03.415 element at address: 0x200027a65680 with size: 0.023743 MiB 00:06:03.415 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:06:03.415 element at address: 0x20000085a380 with size: 0.016113 MiB 00:06:03.415 associated memzone info: size: 0.015991 MiB name: RG_ring_3_70249 00:06:03.415 element at address: 0x200027a6b7c0 with size: 0.002441 MiB 00:06:03.415 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:06:03.415 element at address: 0x2000004ffb80 with size: 0.000305 MiB 00:06:03.415 associated memzone info: size: 0.000183 MiB name: MP_msgpool_70249 00:06:03.415 element at address: 0x2000008ffa00 with size: 0.000305 MiB 00:06:03.415 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_70249 00:06:03.415 element at address: 0x20000085a180 with size: 0.000305 MiB 00:06:03.415 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_70249 00:06:03.415 element at address: 0x200027a6c280 with size: 0.000305 MiB 00:06:03.415 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:06:03.415 06:39:56 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:06:03.415 06:39:56 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 70249 00:06:03.415 06:39:56 dpdk_mem_utility -- common/autotest_common.sh@954 -- # '[' -z 70249 ']' 00:06:03.415 06:39:56 dpdk_mem_utility -- common/autotest_common.sh@958 -- # kill -0 70249 00:06:03.415 06:39:56 dpdk_mem_utility -- common/autotest_common.sh@959 -- # uname 00:06:03.415 06:39:56 dpdk_mem_utility -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:03.415 06:39:56 dpdk_mem_utility -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70249 00:06:03.415 killing process with pid 70249 00:06:03.415 06:39:56 dpdk_mem_utility -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:03.415 06:39:56 dpdk_mem_utility -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:03.415 06:39:56 dpdk_mem_utility -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70249' 00:06:03.415 06:39:56 dpdk_mem_utility -- common/autotest_common.sh@973 -- # kill 70249 00:06:03.415 06:39:56 dpdk_mem_utility -- common/autotest_common.sh@978 -- # wait 70249 00:06:03.676 00:06:03.676 real 0m1.492s 00:06:03.676 user 0m1.521s 00:06:03.676 sys 0m0.399s 00:06:03.676 06:39:56 dpdk_mem_utility -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:03.676 06:39:56 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:03.676 ************************************ 00:06:03.676 END TEST dpdk_mem_utility 00:06:03.676 ************************************ 00:06:03.676 06:39:56 -- spdk/autotest.sh@168 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:06:03.676 06:39:56 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:03.676 06:39:56 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:03.676 06:39:56 -- common/autotest_common.sh@10 -- # set +x 
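Everything between the heap summary and the memzone table above is produced in two steps: the env_dpdk_get_mem_stats RPC makes the running target write its DPDK memory state to a file (the trace shows it answering with "/tmp/spdk_mem_dump.txt"), and scripts/dpdk_mem_info.py then renders that dump, first as the heap/mempool/memzone summary and then, with -m 0, as the busy/free element map of heap id 0. A sketch of driving the same flow by hand against a running spdk_tgt; the explicit rpc.py invocation and the JSON parsing are assumptions here, since the harness hides both behind its rpc_cmd helper:

    SPDK=/home/vagrant/spdk_repo/spdk

    # Step 1: ask the target to dump its DPDK memory state to a file.
    dump=$("$SPDK/scripts/rpc.py" env_dpdk_get_mem_stats |
           python3 -c 'import json, sys; print(json.load(sys.stdin)["filename"])')
    echo "raw dump written to: $dump"    # /tmp/spdk_mem_dump.txt in this run

    # Step 2: render it, as a summary and as the per-element map of heap 0
    # (both invocations appear without a file argument in the trace, so the
    # script evidently defaults to the dump path used above).
    "$SPDK/scripts/dpdk_mem_info.py"
    "$SPDK/scripts/dpdk_mem_info.py" -m 0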
00:06:03.676 ************************************ 00:06:03.676 START TEST event 00:06:03.676 ************************************ 00:06:03.676 06:39:56 event -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:06:03.938 * Looking for test storage... 00:06:03.938 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:06:03.938 06:39:56 event -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:03.938 06:39:56 event -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:03.938 06:39:56 event -- common/autotest_common.sh@1693 -- # lcov --version 00:06:03.938 06:39:56 event -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:03.938 06:39:56 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:03.938 06:39:56 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:03.938 06:39:56 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:03.938 06:39:56 event -- scripts/common.sh@336 -- # IFS=.-: 00:06:03.938 06:39:56 event -- scripts/common.sh@336 -- # read -ra ver1 00:06:03.938 06:39:56 event -- scripts/common.sh@337 -- # IFS=.-: 00:06:03.938 06:39:56 event -- scripts/common.sh@337 -- # read -ra ver2 00:06:03.938 06:39:56 event -- scripts/common.sh@338 -- # local 'op=<' 00:06:03.938 06:39:56 event -- scripts/common.sh@340 -- # ver1_l=2 00:06:03.938 06:39:56 event -- scripts/common.sh@341 -- # ver2_l=1 00:06:03.938 06:39:56 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:03.938 06:39:56 event -- scripts/common.sh@344 -- # case "$op" in 00:06:03.938 06:39:56 event -- scripts/common.sh@345 -- # : 1 00:06:03.938 06:39:56 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:03.938 06:39:56 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:03.938 06:39:56 event -- scripts/common.sh@365 -- # decimal 1 00:06:03.938 06:39:56 event -- scripts/common.sh@353 -- # local d=1 00:06:03.938 06:39:56 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:03.938 06:39:56 event -- scripts/common.sh@355 -- # echo 1 00:06:03.938 06:39:56 event -- scripts/common.sh@365 -- # ver1[v]=1 00:06:03.938 06:39:56 event -- scripts/common.sh@366 -- # decimal 2 00:06:03.938 06:39:56 event -- scripts/common.sh@353 -- # local d=2 00:06:03.938 06:39:56 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:03.938 06:39:56 event -- scripts/common.sh@355 -- # echo 2 00:06:03.938 06:39:56 event -- scripts/common.sh@366 -- # ver2[v]=2 00:06:03.938 06:39:56 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:03.938 06:39:56 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:03.938 06:39:56 event -- scripts/common.sh@368 -- # return 0 00:06:03.938 06:39:56 event -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:03.938 06:39:56 event -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:03.938 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:03.938 --rc genhtml_branch_coverage=1 00:06:03.938 --rc genhtml_function_coverage=1 00:06:03.938 --rc genhtml_legend=1 00:06:03.938 --rc geninfo_all_blocks=1 00:06:03.938 --rc geninfo_unexecuted_blocks=1 00:06:03.938 00:06:03.938 ' 00:06:03.938 06:39:56 event -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:03.938 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:03.938 --rc genhtml_branch_coverage=1 00:06:03.938 --rc genhtml_function_coverage=1 00:06:03.938 --rc genhtml_legend=1 00:06:03.938 --rc 
geninfo_all_blocks=1 00:06:03.938 --rc geninfo_unexecuted_blocks=1 00:06:03.938 00:06:03.938 ' 00:06:03.938 06:39:56 event -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:03.938 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:03.938 --rc genhtml_branch_coverage=1 00:06:03.938 --rc genhtml_function_coverage=1 00:06:03.938 --rc genhtml_legend=1 00:06:03.938 --rc geninfo_all_blocks=1 00:06:03.938 --rc geninfo_unexecuted_blocks=1 00:06:03.938 00:06:03.938 ' 00:06:03.938 06:39:56 event -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:03.938 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:03.938 --rc genhtml_branch_coverage=1 00:06:03.938 --rc genhtml_function_coverage=1 00:06:03.938 --rc genhtml_legend=1 00:06:03.938 --rc geninfo_all_blocks=1 00:06:03.938 --rc geninfo_unexecuted_blocks=1 00:06:03.938 00:06:03.938 ' 00:06:03.938 06:39:56 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:06:03.938 06:39:56 event -- bdev/nbd_common.sh@6 -- # set -e 00:06:03.938 06:39:56 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:06:03.938 06:39:56 event -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:06:03.938 06:39:56 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:03.938 06:39:56 event -- common/autotest_common.sh@10 -- # set +x 00:06:03.938 ************************************ 00:06:03.938 START TEST event_perf 00:06:03.938 ************************************ 00:06:03.938 06:39:56 event.event_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:06:03.938 Running I/O for 1 seconds...[2024-11-18 06:39:56.881992] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 00:06:03.938 [2024-11-18 06:39:56.882269] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70329 ] 00:06:04.200 [2024-11-18 06:39:57.043356] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:04.200 [2024-11-18 06:39:57.076189] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:04.200 [2024-11-18 06:39:57.076794] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:06:04.200 Running I/O for 1 seconds...[2024-11-18 06:39:57.076846] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:04.200 [2024-11-18 06:39:57.076396] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:05.146 00:06:05.146 lcore 0: 140089 00:06:05.146 lcore 1: 140092 00:06:05.146 lcore 2: 140094 00:06:05.146 lcore 3: 140091 00:06:05.146 done. 
00:06:05.146 00:06:05.146 real 0m1.288s 00:06:05.146 user 0m4.081s 00:06:05.146 sys 0m0.083s 00:06:05.146 06:39:58 event.event_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:05.146 06:39:58 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:06:05.146 ************************************ 00:06:05.146 END TEST event_perf 00:06:05.146 ************************************ 00:06:05.146 06:39:58 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:06:05.146 06:39:58 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:06:05.146 06:39:58 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:05.146 06:39:58 event -- common/autotest_common.sh@10 -- # set +x 00:06:05.146 ************************************ 00:06:05.146 START TEST event_reactor 00:06:05.146 ************************************ 00:06:05.146 06:39:58 event.event_reactor -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:06:05.407 [2024-11-18 06:39:58.236994] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 00:06:05.408 [2024-11-18 06:39:58.237157] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70363 ] 00:06:05.408 [2024-11-18 06:39:58.396744] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:05.408 [2024-11-18 06:39:58.424508] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:06.795 test_start 00:06:06.795 oneshot 00:06:06.795 tick 100 00:06:06.795 tick 100 00:06:06.795 tick 250 00:06:06.795 tick 100 00:06:06.795 tick 100 00:06:06.795 tick 100 00:06:06.795 tick 250 00:06:06.795 tick 500 00:06:06.795 tick 100 00:06:06.795 tick 100 00:06:06.795 tick 250 00:06:06.795 tick 100 00:06:06.795 tick 100 00:06:06.795 test_end 00:06:06.795 00:06:06.795 real 0m1.269s 00:06:06.795 user 0m1.087s 00:06:06.795 sys 0m0.071s 00:06:06.795 ************************************ 00:06:06.795 END TEST event_reactor 00:06:06.795 ************************************ 00:06:06.795 06:39:59 event.event_reactor -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:06.795 06:39:59 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:06:06.795 06:39:59 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:06.795 06:39:59 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:06:06.795 06:39:59 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:06.795 06:39:59 event -- common/autotest_common.sh@10 -- # set +x 00:06:06.795 ************************************ 00:06:06.795 START TEST event_reactor_perf 00:06:06.795 ************************************ 00:06:06.795 06:39:59 event.event_reactor_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:06.795 [2024-11-18 06:39:59.568844] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 
00:06:06.795 [2024-11-18 06:39:59.569273] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70400 ] 00:06:06.795 [2024-11-18 06:39:59.730007] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:06.795 [2024-11-18 06:39:59.757934] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:07.741 test_start 00:06:07.741 test_end 00:06:07.741 Performance: 309259 events per second 00:06:07.741 00:06:07.741 real 0m1.276s 00:06:07.741 user 0m1.094s 00:06:07.741 sys 0m0.071s 00:06:07.741 06:40:00 event.event_reactor_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:07.741 ************************************ 00:06:07.741 END TEST event_reactor_perf 00:06:07.741 ************************************ 00:06:07.741 06:40:00 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:06:08.003 06:40:00 event -- event/event.sh@49 -- # uname -s 00:06:08.003 06:40:00 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:06:08.003 06:40:00 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:06:08.003 06:40:00 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:08.003 06:40:00 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:08.003 06:40:00 event -- common/autotest_common.sh@10 -- # set +x 00:06:08.003 ************************************ 00:06:08.003 START TEST event_scheduler 00:06:08.003 ************************************ 00:06:08.003 06:40:00 event.event_scheduler -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:06:08.003 * Looking for test storage... 
00:06:08.003 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:06:08.003 06:40:00 event.event_scheduler -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:08.003 06:40:00 event.event_scheduler -- common/autotest_common.sh@1693 -- # lcov --version 00:06:08.003 06:40:00 event.event_scheduler -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:08.003 06:40:01 event.event_scheduler -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:08.003 06:40:01 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:08.003 06:40:01 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:08.003 06:40:01 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:08.003 06:40:01 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:06:08.003 06:40:01 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:06:08.003 06:40:01 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:06:08.003 06:40:01 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:06:08.003 06:40:01 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:06:08.003 06:40:01 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:06:08.003 06:40:01 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:06:08.003 06:40:01 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:08.003 06:40:01 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:06:08.003 06:40:01 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:06:08.003 06:40:01 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:08.003 06:40:01 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:08.003 06:40:01 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:06:08.003 06:40:01 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:06:08.003 06:40:01 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:08.003 06:40:01 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:06:08.003 06:40:01 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:06:08.003 06:40:01 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:06:08.003 06:40:01 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:06:08.003 06:40:01 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:08.003 06:40:01 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:06:08.003 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:08.003 06:40:01 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:06:08.003 06:40:01 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:08.003 06:40:01 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:08.003 06:40:01 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:06:08.003 06:40:01 event.event_scheduler -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:08.003 06:40:01 event.event_scheduler -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:08.003 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:08.003 --rc genhtml_branch_coverage=1 00:06:08.003 --rc genhtml_function_coverage=1 00:06:08.003 --rc genhtml_legend=1 00:06:08.003 --rc geninfo_all_blocks=1 00:06:08.003 --rc geninfo_unexecuted_blocks=1 00:06:08.003 00:06:08.003 ' 00:06:08.003 06:40:01 event.event_scheduler -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:08.003 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:08.003 --rc genhtml_branch_coverage=1 00:06:08.003 --rc genhtml_function_coverage=1 00:06:08.003 --rc genhtml_legend=1 00:06:08.003 --rc geninfo_all_blocks=1 00:06:08.003 --rc geninfo_unexecuted_blocks=1 00:06:08.003 00:06:08.003 ' 00:06:08.003 06:40:01 event.event_scheduler -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:08.003 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:08.003 --rc genhtml_branch_coverage=1 00:06:08.003 --rc genhtml_function_coverage=1 00:06:08.003 --rc genhtml_legend=1 00:06:08.003 --rc geninfo_all_blocks=1 00:06:08.003 --rc geninfo_unexecuted_blocks=1 00:06:08.003 00:06:08.003 ' 00:06:08.003 06:40:01 event.event_scheduler -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:08.003 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:08.003 --rc genhtml_branch_coverage=1 00:06:08.003 --rc genhtml_function_coverage=1 00:06:08.003 --rc genhtml_legend=1 00:06:08.003 --rc geninfo_all_blocks=1 00:06:08.003 --rc geninfo_unexecuted_blocks=1 00:06:08.003 00:06:08.003 ' 00:06:08.003 06:40:01 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:06:08.003 06:40:01 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=70470 00:06:08.003 06:40:01 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:06:08.003 06:40:01 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 70470 00:06:08.003 06:40:01 event.event_scheduler -- common/autotest_common.sh@835 -- # '[' -z 70470 ']' 00:06:08.003 06:40:01 event.event_scheduler -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:08.003 06:40:01 event.event_scheduler -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:08.003 06:40:01 event.event_scheduler -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:08.003 06:40:01 event.event_scheduler -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:08.003 06:40:01 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:08.003 06:40:01 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:06:08.264 [2024-11-18 06:40:01.114568] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 
00:06:08.264 [2024-11-18 06:40:01.114708] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70470 ] 00:06:08.264 [2024-11-18 06:40:01.274736] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:08.264 [2024-11-18 06:40:01.309203] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:08.264 [2024-11-18 06:40:01.309461] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:08.264 [2024-11-18 06:40:01.309866] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:08.264 [2024-11-18 06:40:01.309946] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:06:09.209 06:40:01 event.event_scheduler -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:09.209 06:40:01 event.event_scheduler -- common/autotest_common.sh@868 -- # return 0 00:06:09.209 06:40:01 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:06:09.209 06:40:01 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:09.209 06:40:01 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:09.209 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:06:09.209 POWER: Cannot set governor of lcore 0 to userspace 00:06:09.209 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:06:09.209 POWER: Cannot set governor of lcore 0 to performance 00:06:09.209 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:06:09.209 POWER: Cannot set governor of lcore 0 to userspace 00:06:09.209 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:06:09.209 POWER: Cannot set governor of lcore 0 to userspace 00:06:09.209 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:06:09.209 POWER: Unable to set Power Management Environment for lcore 0 00:06:09.209 [2024-11-18 06:40:01.991463] dpdk_governor.c: 130:_init_core: *ERROR*: Failed to initialize on core0 00:06:09.209 [2024-11-18 06:40:01.991504] dpdk_governor.c: 191:_init: *ERROR*: Failed to initialize on core0 00:06:09.209 [2024-11-18 06:40:01.991514] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:06:09.209 [2024-11-18 06:40:01.991592] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:06:09.209 [2024-11-18 06:40:01.991601] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:06:09.209 [2024-11-18 06:40:01.991625] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:06:09.209 06:40:01 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:09.209 06:40:01 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:06:09.209 06:40:01 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:09.209 06:40:01 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:09.209 [2024-11-18 06:40:02.070196] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
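The POWER: errors above are expected in this guest: framework_set_scheduler dynamic first tries to hand CPU frequency control to the DPDK governor, cannot open the cpufreq sysfs knobs or the virtio power-agent channel inside the VM, logs "Unable to initialize dpdk governor", and falls back to the dynamic scheduler's built-in thresholds (load limit 20, core limit 80, core busy 95, per the notices). Condensed, the RPC pairing that drives this looks as follows; rpc_cmd resolves to scripts/rpc.py, and the trailing get call is an illustrative sanity check rather than part of the traced test:

    rpc() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock "$@"; }

    rpc framework_set_scheduler dynamic   # must land before subsystem init,
                                          # which is why the app was started
                                          # with --wait-for-rpc
    rpc framework_start_init              # complete the deferred startup
    rpc framework_get_scheduler           # expect "dynamic" to be reported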
00:06:09.209 06:40:02 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:09.209 06:40:02 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread
00:06:09.209 06:40:02 event.event_scheduler -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:06:09.209 06:40:02 event.event_scheduler -- common/autotest_common.sh@1111 -- # xtrace_disable
00:06:09.209 06:40:02 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x
00:06:09.209 ************************************
00:06:09.209 START TEST scheduler_create_thread
00:06:09.209 ************************************
00:06:09.209 06:40:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1129 -- # scheduler_create_thread
00:06:09.209 06:40:02 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100
00:06:09.209 06:40:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:09.209 06:40:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:06:09.209 2
00:06:09.209 06:40:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:09.209 06:40:02 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100
00:06:09.209 06:40:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:09.209 06:40:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:06:09.209 3
00:06:09.209 06:40:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:09.209 06:40:02 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100
00:06:09.210 06:40:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:09.210 06:40:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:06:09.210 4
00:06:09.210 06:40:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:09.210 06:40:02 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100
00:06:09.210 06:40:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:09.210 06:40:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:06:09.210 5
00:06:09.210 06:40:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:09.210 06:40:02 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0
00:06:09.210 06:40:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:09.210 06:40:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:06:09.210 6
00:06:09.210 06:40:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:09.210 06:40:02 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0
00:06:09.210 06:40:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:09.210 06:40:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:06:09.210 7
00:06:09.210 06:40:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:09.210 06:40:02 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0
00:06:09.210 06:40:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:09.210 06:40:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:06:09.210 8
00:06:09.210 06:40:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:09.210 06:40:02 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0
00:06:09.210 06:40:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:09.210 06:40:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:06:09.210 9
00:06:09.210 06:40:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:09.210 06:40:02 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30
00:06:09.210 06:40:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:09.210 06:40:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:06:09.210 10
00:06:09.210 06:40:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:09.210 06:40:02 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0
00:06:09.210 06:40:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:09.210 06:40:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:06:09.210 06:40:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:09.210 06:40:02 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11
00:06:09.210 06:40:02 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50
00:06:09.210 06:40:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:09.210 06:40:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:06:09.210 06:40:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:09.210 06:40:02 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100
00:06:09.210 06:40:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:09.210 06:40:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:06:10.597 06:40:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:10.597 06:40:03 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12
00:06:10.597 06:40:03 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12
00:06:10.597 06:40:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:10.597 06:40:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:06:12.054 ************************************
00:06:12.054 END TEST scheduler_create_thread
00:06:12.054 ************************************
00:06:12.054 06:40:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:12.054
00:06:12.054 real 0m2.610s
00:06:12.054 user 0m0.015s
00:06:12.054 sys 0m0.006s
00:06:12.054 06:40:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1130 -- # xtrace_disable
00:06:12.054 06:40:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:06:12.054 06:40:04 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT
00:06:12.054 06:40:04 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 70470
00:06:12.054 06:40:04 event.event_scheduler -- common/autotest_common.sh@954 -- # '[' -z 70470 ']'
00:06:12.054 06:40:04 event.event_scheduler -- common/autotest_common.sh@958 -- # kill -0 70470
00:06:12.054 06:40:04 event.event_scheduler -- common/autotest_common.sh@959 -- # uname
00:06:12.054 06:40:04 event.event_scheduler -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:06:12.054 06:40:04 event.event_scheduler -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70470
00:06:12.054 killing process with pid 70470
06:40:04 event.event_scheduler -- common/autotest_common.sh@960 -- # process_name=reactor_2
00:06:12.054 06:40:04 event.event_scheduler -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']'
00:06:12.054 06:40:04 event.event_scheduler -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70470'
00:06:12.054 06:40:04 event.event_scheduler -- common/autotest_common.sh@973 -- # kill 70470
00:06:12.054 06:40:04 event.event_scheduler -- common/autotest_common.sh@978 -- # wait 70470
00:06:12.316 [2024-11-18 06:40:05.177509] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped.
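The test above drives thread placement entirely over JSON-RPC: scheduler_thread_create, scheduler_thread_set_active and scheduler_thread_delete are test-only methods loaded through rpc.py's --plugin mechanism (the plugin module ships under test/event/scheduler/ in the SPDK tree; the exact PYTHONPATH below is an assumption). Replayed by hand, the lifecycle traced above is roughly:

    # Sketch of the thread lifecycle exercised by scheduler_create_thread.
    export PYTHONPATH=$PYTHONPATH:/home/vagrant/spdk_repo/spdk/test/event/scheduler
    RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py --plugin scheduler_plugin"
    tid=$($RPC scheduler_thread_create -n active_pinned -m 0x1 -a 100)  # 100% busy, pinned to core 0
    $RPC scheduler_thread_set_active "$tid" 50                          # retarget the thread to 50% busy
    $RPC scheduler_thread_delete "$tid"                                 # tear it down again

-n names the thread, -m is the core mask for pinned threads, and -a is the target active percentage; the create call prints the new thread id, which is why the trace captures thread_id=11 and thread_id=12.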
00:06:12.316
00:06:12.316 real 0m4.423s
00:06:12.316 user 0m8.141s
00:06:12.316 sys 0m0.358s
06:40:05 event.event_scheduler -- common/autotest_common.sh@1130 -- # xtrace_disable
00:06:12.316 ************************************
00:06:12.316 END TEST event_scheduler
00:06:12.316 ************************************
00:06:12.316 06:40:05 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x
00:06:12.316 06:40:05 event -- event/event.sh@51 -- # modprobe -n nbd
00:06:12.316 06:40:05 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test
00:06:12.316 06:40:05 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:06:12.316 06:40:05 event -- common/autotest_common.sh@1111 -- # xtrace_disable
00:06:12.316 06:40:05 event -- common/autotest_common.sh@10 -- # set +x
00:06:12.316 ************************************
00:06:12.316 START TEST app_repeat
00:06:12.316 ************************************
00:06:12.316 06:40:05 event.app_repeat -- common/autotest_common.sh@1129 -- # app_repeat_test
00:06:12.316 06:40:05 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:06:12.316 06:40:05 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:06:12.316 06:40:05 event.app_repeat -- event/event.sh@13 -- # local nbd_list
00:06:12.316 06:40:05 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1')
00:06:12.316 06:40:05 event.app_repeat -- event/event.sh@14 -- # local bdev_list
00:06:12.316 06:40:05 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4
00:06:12.316 06:40:05 event.app_repeat -- event/event.sh@17 -- # modprobe nbd
00:06:12.316 06:40:05 event.app_repeat -- event/event.sh@19 -- # repeat_pid=70565
00:06:12.316 06:40:05 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT
00:06:12.316 Process app_repeat pid: 70565
00:06:12.316 06:40:05 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 70565'
00:06:12.316 spdk_app_start Round 0
00:06:12.316 06:40:05 event.app_repeat -- event/event.sh@23 -- # for i in {0..2}
00:06:12.316 06:40:05 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0'
00:06:12.316 06:40:05 event.app_repeat -- event/event.sh@25 -- # waitforlisten 70565 /var/tmp/spdk-nbd.sock
00:06:12.316 06:40:05 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4
00:06:12.316 06:40:05 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 70565 ']'
00:06:12.316 06:40:05 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock
00:06:12.316 06:40:05 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100
00:06:12.316 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...
00:06:12.316 06:40:05 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...'
00:06:12.316 06:40:05 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable
00:06:12.316 06:40:05 event.app_repeat -- common/autotest_common.sh@10 -- # set +x
00:06:12.577 [2024-11-18 06:40:05.414749] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization...
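START TEST app_repeat boils down to: load the nbd kernel module, start the app_repeat binary on two cores (-m 0x3) with a four-second repeat interval (-t 4), and poll until its RPC socket answers. A hand-run equivalent is sketched below; waitforlisten is approximated here with an rpc_get_methods poll, which is an assumption about how to probe the socket, not the helper's literal code:

    # Sketch: launch the repeat app and wait for its RPC socket.
    modprobe nbd
    /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat \
        -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 &
    repeat_pid=$!
    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock \
        rpc_get_methods >/dev/null 2>&1; do
        sleep 0.1   # same idea as waitforlisten's retry loop (max_retries=100)
    done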
00:06:12.577 [2024-11-18 06:40:05.414838] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70565 ]
00:06:12.577 [2024-11-18 06:40:05.566348] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2
00:06:12.577 [2024-11-18 06:40:05.592350] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:06:12.577 [2024-11-18 06:40:05.592455] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:06:13.519 06:40:06 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:06:13.519 06:40:06 event.app_repeat -- common/autotest_common.sh@868 -- # return 0
00:06:13.519 06:40:06 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
00:06:13.519 Malloc0
00:06:13.519 06:40:06 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
00:06:13.780 Malloc1
00:06:13.780 06:40:06 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
00:06:13.780 06:40:06 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:06:13.780 06:40:06 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1')
00:06:13.780 06:40:06 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list
00:06:13.780 06:40:06 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:06:13.780 06:40:06 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list
00:06:13.780 06:40:06 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
00:06:13.780 06:40:06 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:06:13.780 06:40:06 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1')
00:06:13.780 06:40:06 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list
00:06:13.780 06:40:06 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:06:13.780 06:40:06 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list
00:06:13.780 06:40:06 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i
00:06:13.780 06:40:06 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 ))
00:06:13.780 06:40:06 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:06:13.780 06:40:06 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0
00:06:14.041 /dev/nbd0
00:06:14.041 06:40:06 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0
00:06:14.041 06:40:06 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0
00:06:14.041 06:40:06 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0
00:06:14.041 06:40:06 event.app_repeat -- common/autotest_common.sh@873 -- # local i
00:06:14.041 06:40:06 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:06:14.041 06:40:06 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:06:14.041 06:40:06 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions
00:06:14.041 06:40:06 event.app_repeat -- common/autotest_common.sh@877 -- # break
00:06:14.041 06:40:06 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:06:14.041 06:40:07 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:06:14.041 06:40:07 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct
00:06:14.041 1+0 records in
00:06:14.041 1+0 records out
00:06:14.041 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000301737 s, 13.6 MB/s
00:06:14.041 06:40:07 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest
00:06:14.041 06:40:07 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096
00:06:14.041 06:40:07 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest
00:06:14.041 06:40:07 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:06:14.041 06:40:07 event.app_repeat -- common/autotest_common.sh@893 -- # return 0
00:06:14.041 06:40:07 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:06:14.041 06:40:07 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:06:14.041 06:40:07 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1
00:06:14.301 /dev/nbd1
00:06:14.301 06:40:07 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1
00:06:14.301 06:40:07 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1
00:06:14.301 06:40:07 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1
00:06:14.301 06:40:07 event.app_repeat -- common/autotest_common.sh@873 -- # local i
00:06:14.301 06:40:07 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:06:14.301 06:40:07 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:06:14.301 06:40:07 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions
00:06:14.301 06:40:07 event.app_repeat -- common/autotest_common.sh@877 -- # break
00:06:14.301 06:40:07 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:06:14.301 06:40:07 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:06:14.301 06:40:07 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct
00:06:14.301 1+0 records in
00:06:14.301 1+0 records out
00:06:14.302 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000493313 s, 8.3 MB/s
00:06:14.302 06:40:07 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest
00:06:14.302 06:40:07 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096
00:06:14.302 06:40:07 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest
00:06:14.302 06:40:07 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:06:14.302 06:40:07 event.app_repeat -- common/autotest_common.sh@893 -- # return 0
00:06:14.302 06:40:07 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:06:14.302 06:40:07 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:06:14.302 06:40:07 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:06:14.302 06:40:07 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
06:40:07 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:06:14.561 06:40:07 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[
00:06:14.561 {
00:06:14.561 "nbd_device": "/dev/nbd0",
00:06:14.561 "bdev_name": "Malloc0"
00:06:14.561 },
00:06:14.561 {
00:06:14.561 "nbd_device": "/dev/nbd1",
00:06:14.561 "bdev_name": "Malloc1"
00:06:14.561 }
00:06:14.561 ]'
06:40:07 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[
00:06:14.561 {
00:06:14.561 "nbd_device": "/dev/nbd0",
00:06:14.561 "bdev_name": "Malloc0"
00:06:14.561 },
00:06:14.561 {
00:06:14.561 "nbd_device": "/dev/nbd1",
00:06:14.561 "bdev_name": "Malloc1"
00:06:14.561 }
00:06:14.561 ]'
00:06:14.561 06:40:07 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:06:14.561 06:40:07 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0
00:06:14.561 /dev/nbd1'
00:06:14.561 06:40:07 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:06:14.561 06:40:07 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0
00:06:14.561 /dev/nbd1'
00:06:14.561 06:40:07 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2
00:06:14.561 06:40:07 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2
00:06:14.561 06:40:07 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2
00:06:14.561 06:40:07 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']'
00:06:14.561 06:40:07 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write
00:06:14.561 06:40:07 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:06:14.561 06:40:07 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list
00:06:14.561 06:40:07 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write
00:06:14.561 06:40:07 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest
00:06:14.561 06:40:07 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']'
00:06:14.561 06:40:07 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256
00:06:14.561 256+0 records in
00:06:14.561 256+0 records out
00:06:14.561 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00436854 s, 240 MB/s
00:06:14.561 06:40:07 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:06:14.561 06:40:07 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct
00:06:14.561 256+0 records in
00:06:14.561 256+0 records out
00:06:14.561 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0177265 s, 59.2 MB/s
00:06:14.561 06:40:07 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:06:14.561 06:40:07 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct
00:06:14.561 256+0 records in
00:06:14.561 256+0 records out
00:06:14.561 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0177712 s, 59.0 MB/s
00:06:14.561 06:40:07 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify
00:06:14.561 06:40:07 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:06:14.561 06:40:07 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list
00:06:14.561 06:40:07 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify
00:06:14.561 06:40:07 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest
00:06:14.561 06:40:07 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']'
00:06:14.561 06:40:07 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']'
00:06:14.561 06:40:07 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:06:14.561 06:40:07 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0
00:06:14.561 06:40:07 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:06:14.561 06:40:07 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1
00:06:14.561 06:40:07 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest
00:06:14.561 06:40:07 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1'
00:06:14.561 06:40:07 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:06:14.561 06:40:07 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:06:14.561 06:40:07 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list
00:06:14.561 06:40:07 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i
00:06:14.561 06:40:07 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:06:14.561 06:40:07 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0
00:06:14.821 06:40:07 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:06:14.821 06:40:07 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:06:14.821 06:40:07 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:06:14.821 06:40:07 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:06:14.821 06:40:07 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:06:14.821 06:40:07 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:06:14.821 06:40:07 event.app_repeat -- bdev/nbd_common.sh@41 -- # break
00:06:14.821 06:40:07 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0
00:06:14.821 06:40:07 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:06:14.821 06:40:07 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1
00:06:15.081 06:40:07 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1
00:06:15.081 06:40:07 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1
00:06:15.081 06:40:07 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1
00:06:15.081 06:40:07 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:06:15.081 06:40:07 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:06:15.081 06:40:07 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions
00:06:15.081 06:40:07 event.app_repeat -- bdev/nbd_common.sh@41 -- # break
00:06:15.081 06:40:07 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0
00:06:15.081 06:40:07 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:06:15.081 06:40:07 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:06:15.081 06:40:07 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:06:15.342 06:40:08 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]'
00:06:15.342 06:40:08 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]'
00:06:15.342 06:40:08 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:06:15.342 06:40:08 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name=
00:06:15.342 06:40:08 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:06:15.342 06:40:08 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo ''
00:06:15.342 06:40:08 event.app_repeat -- bdev/nbd_common.sh@65 -- # true
00:06:15.342 06:40:08 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0
00:06:15.342 06:40:08 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0
00:06:15.342 06:40:08 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0
00:06:15.342 06:40:08 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']'
00:06:15.342 06:40:08 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0
00:06:15.342 06:40:08 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM
00:06:15.342 06:40:08 event.app_repeat -- event/event.sh@35 -- # sleep 3
00:06:15.603 [2024-11-18 06:40:08.499274] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2
00:06:15.603 [2024-11-18 06:40:08.515955] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:06:15.603 [2024-11-18 06:40:08.515957] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:06:15.603 [2024-11-18 06:40:08.544990] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered.
00:06:15.603 [2024-11-18 06:40:08.545044] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered.
00:06:18.900 spdk_app_start Round 1
00:06:18.900 06:40:11 event.app_repeat -- event/event.sh@23 -- # for i in {0..2}
00:06:18.900 06:40:11 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1'
00:06:18.900 06:40:11 event.app_repeat -- event/event.sh@25 -- # waitforlisten 70565 /var/tmp/spdk-nbd.sock
00:06:18.900 06:40:11 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 70565 ']'
00:06:18.900 06:40:11 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock
00:06:18.900 06:40:11 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100
00:06:18.900 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...
00:06:18.900 06:40:11 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...'
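Round 0 above is the whole data-integrity pattern in miniature: export a malloc bdev through the kernel nbd driver, write 1 MiB of random data through the block device with O_DIRECT, read it back byte-for-byte, then tear down and confirm nbd_get_disks returns []. A condensed sketch of one round (the /tmp path is an assumption; the test uses a file in its own tree):

    # Sketch of one nbd_rpc_data_verify round, condensed from the trace above.
    RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"
    $RPC bdev_malloc_create 64 4096              # 64 MiB bdev, 4 KiB blocks -> prints "Malloc0"
    $RPC nbd_start_disk Malloc0 /dev/nbd0        # expose the bdev as a kernel block device
    dd if=/dev/urandom of=/tmp/nbdrandtest bs=4096 count=256
    dd if=/tmp/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct
    cmp -b -n 1M /tmp/nbdrandtest /dev/nbd0      # byte-for-byte readback check
    rm /tmp/nbdrandtest
    $RPC nbd_stop_disk /dev/nbd0                 # nbd_get_disks should now report []

The oflag=direct on the write bypasses the page cache, so the cmp really exercises the SPDK bdev path rather than cached data.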
00:06:18.900 06:40:11 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable
00:06:18.900 06:40:11 event.app_repeat -- common/autotest_common.sh@10 -- # set +x
00:06:18.900 06:40:11 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:06:18.900 06:40:11 event.app_repeat -- common/autotest_common.sh@868 -- # return 0
00:06:18.900 06:40:11 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
00:06:18.900 Malloc0
00:06:18.900 06:40:11 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
00:06:18.900 Malloc1
00:06:18.900 06:40:11 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
00:06:18.900 06:40:11 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:06:18.900 06:40:11 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1')
00:06:18.900 06:40:11 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list
00:06:18.900 06:40:11 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:06:18.900 06:40:11 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list
00:06:18.900 06:40:11 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
00:06:18.900 06:40:11 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:06:18.900 06:40:11 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1')
00:06:18.900 06:40:11 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list
00:06:18.900 06:40:11 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:06:18.900 06:40:11 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list
00:06:18.900 06:40:11 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i
00:06:18.900 06:40:11 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 ))
00:06:18.900 06:40:11 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:06:18.900 06:40:11 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0
00:06:19.157 /dev/nbd0
00:06:19.157 06:40:12 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0
00:06:19.157 06:40:12 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0
00:06:19.157 06:40:12 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0
00:06:19.157 06:40:12 event.app_repeat -- common/autotest_common.sh@873 -- # local i
00:06:19.157 06:40:12 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:06:19.157 06:40:12 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:06:19.157 06:40:12 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions
00:06:19.157 06:40:12 event.app_repeat -- common/autotest_common.sh@877 -- # break
00:06:19.157 06:40:12 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:06:19.157 06:40:12 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:06:19.157 06:40:12 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct
00:06:19.157 1+0 records in
00:06:19.157 1+0 records out
00:06:19.157 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00026271 s, 15.6 MB/s
00:06:19.157 06:40:12 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest
00:06:19.157 06:40:12 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096
00:06:19.157 06:40:12 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest
00:06:19.157 06:40:12 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:06:19.157 06:40:12 event.app_repeat -- common/autotest_common.sh@893 -- # return 0
00:06:19.157 06:40:12 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:06:19.157 06:40:12 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:06:19.157 06:40:12 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1
00:06:19.417 /dev/nbd1
00:06:19.417 06:40:12 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1
00:06:19.417 06:40:12 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1
00:06:19.417 06:40:12 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1
00:06:19.417 06:40:12 event.app_repeat -- common/autotest_common.sh@873 -- # local i
00:06:19.417 06:40:12 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:06:19.417 06:40:12 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:06:19.417 06:40:12 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions
00:06:19.417 06:40:12 event.app_repeat -- common/autotest_common.sh@877 -- # break
00:06:19.417 06:40:12 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:06:19.417 06:40:12 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:06:19.417 06:40:12 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct
00:06:19.417 1+0 records in
00:06:19.417 1+0 records out
00:06:19.417 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000250062 s, 16.4 MB/s
00:06:19.417 06:40:12 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest
00:06:19.417 06:40:12 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096
00:06:19.417 06:40:12 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest
00:06:19.417 06:40:12 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:06:19.417 06:40:12 event.app_repeat -- common/autotest_common.sh@893 -- # return 0
00:06:19.417 06:40:12 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:06:19.417 06:40:12 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:06:19.417 06:40:12 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:06:19.417 06:40:12 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:06:19.417 06:40:12 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:06:19.676 06:40:12 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[
00:06:19.676 {
00:06:19.676 "nbd_device": "/dev/nbd0",
00:06:19.676 "bdev_name": "Malloc0"
00:06:19.676 },
00:06:19.676 {
00:06:19.676 "nbd_device": "/dev/nbd1",
00:06:19.676 "bdev_name": "Malloc1"
00:06:19.676 }
00:06:19.676 ]'
06:40:12 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[
00:06:19.676 {
00:06:19.676 "nbd_device": "/dev/nbd0",
00:06:19.676 "bdev_name": "Malloc0"
00:06:19.676 },
00:06:19.676 {
00:06:19.676 "nbd_device": "/dev/nbd1",
00:06:19.676 "bdev_name": "Malloc1"
00:06:19.676 }
00:06:19.676 ]'
00:06:19.676 06:40:12 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:06:19.676 06:40:12 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0
00:06:19.676 /dev/nbd1'
00:06:19.676 06:40:12 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:06:19.676 06:40:12 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0
00:06:19.676 /dev/nbd1'
00:06:19.676 06:40:12 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2
00:06:19.676 06:40:12 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2
00:06:19.676 06:40:12 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2
00:06:19.676 06:40:12 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']'
00:06:19.676 06:40:12 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write
00:06:19.676 06:40:12 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:06:19.676 06:40:12 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list
00:06:19.676 06:40:12 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write
00:06:19.676 06:40:12 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest
00:06:19.676 06:40:12 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']'
00:06:19.676 06:40:12 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256
00:06:19.676 256+0 records in
00:06:19.676 256+0 records out
00:06:19.676 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0080039 s, 131 MB/s
00:06:19.676 06:40:12 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:06:19.676 06:40:12 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct
00:06:19.676 256+0 records in
00:06:19.676 256+0 records out
00:06:19.676 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0208519 s, 50.3 MB/s
00:06:19.677 06:40:12 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:06:19.677 06:40:12 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct
00:06:19.677 256+0 records in
00:06:19.677 256+0 records out
00:06:19.677 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0149406 s, 70.2 MB/s
00:06:19.677 06:40:12 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify
00:06:19.677 06:40:12 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:06:19.677 06:40:12 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list
00:06:19.677 06:40:12 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify
00:06:19.677 06:40:12 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest
00:06:19.677 06:40:12 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']'
00:06:19.677 06:40:12 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']'
00:06:19.677 06:40:12 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:06:19.677 06:40:12 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0
00:06:19.677 06:40:12 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:06:19.677 06:40:12 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1
00:06:19.677 06:40:12 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest
00:06:19.677 06:40:12 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1'
00:06:19.677 06:40:12 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:06:19.677 06:40:12 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:06:19.677 06:40:12 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list
00:06:19.677 06:40:12 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i
00:06:19.677 06:40:12 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:06:19.677 06:40:12 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0
00:06:19.937 06:40:12 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:06:19.937 06:40:12 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:06:19.937 06:40:12 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:06:19.937 06:40:12 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:06:19.937 06:40:12 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:06:19.937 06:40:12 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:06:19.937 06:40:12 event.app_repeat -- bdev/nbd_common.sh@41 -- # break
00:06:19.937 06:40:12 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0
00:06:19.937 06:40:12 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:06:19.937 06:40:12 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1
00:06:20.200 06:40:13 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1
00:06:20.200 06:40:13 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1
00:06:20.200 06:40:13 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1
00:06:20.200 06:40:13 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:06:20.200 06:40:13 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:06:20.200 06:40:13 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions
00:06:20.200 06:40:13 event.app_repeat -- bdev/nbd_common.sh@41 -- # break
00:06:20.200 06:40:13 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0
00:06:20.200 06:40:13 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:06:20.200 06:40:13 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:06:20.200 06:40:13 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:06:20.460 06:40:13 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]'
00:06:20.460 06:40:13 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]'
00:06:20.460 06:40:13 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:06:20.460 06:40:13 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name=
00:06:20.460 06:40:13 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo ''
00:06:20.460 06:40:13 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:06:20.461 06:40:13 event.app_repeat -- bdev/nbd_common.sh@65 -- # true
00:06:20.461 06:40:13 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0
00:06:20.461 06:40:13 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0
00:06:20.461 06:40:13 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0
00:06:20.461 06:40:13 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']'
00:06:20.461 06:40:13 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0
00:06:20.461 06:40:13 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM
00:06:20.719 06:40:13 event.app_repeat -- event/event.sh@35 -- # sleep 3
00:06:20.719 [2024-11-18 06:40:13.657362] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2
00:06:20.719 [2024-11-18 06:40:13.673430] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:06:20.719 [2024-11-18 06:40:13.673432] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:06:20.719 [2024-11-18 06:40:13.702554] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered.
00:06:20.719 [2024-11-18 06:40:13.702599] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered.
00:06:24.021 spdk_app_start Round 2
00:06:24.021 06:40:16 event.app_repeat -- event/event.sh@23 -- # for i in {0..2}
00:06:24.021 06:40:16 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2'
00:06:24.021 06:40:16 event.app_repeat -- event/event.sh@25 -- # waitforlisten 70565 /var/tmp/spdk-nbd.sock
00:06:24.021 06:40:16 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 70565 ']'
00:06:24.021 06:40:16 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock
00:06:24.021 06:40:16 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100
00:06:24.021 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...
00:06:24.021 06:40:16 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...'
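The hand-off between rounds is visible above: the script asks the running instance to terminate itself over RPC, sleeps three seconds while app_repeat's outer loop relaunches spdk_app_start (the reactor and notify messages are the reinitialization), and the next round then reuses the same socket. In isolation, the hand-off is just:

    # Sketch: end the current iteration and let app_repeat start the next one.
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock \
        spdk_kill_instance SIGTERM   # graceful shutdown of this app iteration
    sleep 3                          # app_repeat reinitializes SPDK/DPDK meanwhile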
00:06:24.021 06:40:16 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:24.021 06:40:16 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:24.021 06:40:16 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:24.021 06:40:16 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:06:24.021 06:40:16 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:24.021 Malloc0 00:06:24.021 06:40:16 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:24.283 Malloc1 00:06:24.283 06:40:17 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:24.283 06:40:17 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:24.283 06:40:17 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:24.283 06:40:17 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:24.283 06:40:17 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:24.283 06:40:17 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:24.283 06:40:17 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:24.283 06:40:17 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:24.283 06:40:17 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:24.283 06:40:17 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:24.283 06:40:17 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:24.283 06:40:17 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:24.283 06:40:17 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:24.283 06:40:17 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:24.283 06:40:17 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:24.283 06:40:17 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:24.543 /dev/nbd0 00:06:24.543 06:40:17 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:24.543 06:40:17 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:24.543 06:40:17 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:06:24.543 06:40:17 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:06:24.543 06:40:17 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:24.543 06:40:17 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:24.543 06:40:17 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:06:24.543 06:40:17 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:06:24.543 06:40:17 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:24.543 06:40:17 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:24.543 06:40:17 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:24.543 1+0 records in 00:06:24.543 1+0 records out 
00:06:24.543 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000148137 s, 27.7 MB/s 00:06:24.543 06:40:17 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:24.543 06:40:17 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:06:24.543 06:40:17 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:24.543 06:40:17 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:24.543 06:40:17 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:06:24.543 06:40:17 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:24.543 06:40:17 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:24.543 06:40:17 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:24.804 /dev/nbd1 00:06:24.804 06:40:17 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:24.804 06:40:17 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:24.804 06:40:17 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:06:24.804 06:40:17 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:06:24.804 06:40:17 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:24.804 06:40:17 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:24.804 06:40:17 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:06:24.804 06:40:17 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:06:24.804 06:40:17 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:24.804 06:40:17 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:24.804 06:40:17 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:24.804 1+0 records in 00:06:24.804 1+0 records out 00:06:24.804 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000196081 s, 20.9 MB/s 00:06:24.804 06:40:17 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:24.804 06:40:17 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:06:24.804 06:40:17 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:24.804 06:40:17 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:24.804 06:40:17 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:06:24.804 06:40:17 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:24.804 06:40:17 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:24.804 06:40:17 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:24.804 06:40:17 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:24.804 06:40:17 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:24.804 06:40:17 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:24.805 { 00:06:24.805 "nbd_device": "/dev/nbd0", 00:06:24.805 "bdev_name": "Malloc0" 00:06:24.805 }, 00:06:24.805 { 00:06:24.805 "nbd_device": "/dev/nbd1", 00:06:24.805 "bdev_name": "Malloc1" 00:06:24.805 } 
00:06:24.805 ]' 00:06:24.805 06:40:17 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:24.805 { 00:06:24.805 "nbd_device": "/dev/nbd0", 00:06:24.805 "bdev_name": "Malloc0" 00:06:24.805 }, 00:06:24.805 { 00:06:24.805 "nbd_device": "/dev/nbd1", 00:06:24.805 "bdev_name": "Malloc1" 00:06:24.805 } 00:06:24.805 ]' 00:06:24.805 06:40:17 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:25.066 06:40:17 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:25.066 /dev/nbd1' 00:06:25.066 06:40:17 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:25.066 /dev/nbd1' 00:06:25.066 06:40:17 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:25.066 06:40:17 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:25.066 06:40:17 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:25.066 06:40:17 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:25.066 06:40:17 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:25.066 06:40:17 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:25.066 06:40:17 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:25.066 06:40:17 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:25.066 06:40:17 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:25.066 06:40:17 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:25.066 06:40:17 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:25.066 06:40:17 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:25.066 256+0 records in 00:06:25.066 256+0 records out 00:06:25.066 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0089543 s, 117 MB/s 00:06:25.066 06:40:17 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:25.066 06:40:17 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:25.066 256+0 records in 00:06:25.066 256+0 records out 00:06:25.066 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0184736 s, 56.8 MB/s 00:06:25.066 06:40:17 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:25.066 06:40:17 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:25.066 256+0 records in 00:06:25.066 256+0 records out 00:06:25.066 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0157709 s, 66.5 MB/s 00:06:25.066 06:40:17 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:25.066 06:40:17 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:25.066 06:40:17 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:25.066 06:40:17 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:25.066 06:40:17 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:25.066 06:40:17 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:25.066 06:40:17 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:25.066 06:40:17 event.app_repeat -- bdev/nbd_common.sh@82 
-- # for i in "${nbd_list[@]}" 00:06:25.066 06:40:17 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:06:25.067 06:40:17 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:25.067 06:40:17 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:06:25.067 06:40:17 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:25.067 06:40:17 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:25.067 06:40:17 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:25.067 06:40:17 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:25.067 06:40:17 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:25.067 06:40:17 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:25.067 06:40:17 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:25.067 06:40:17 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:25.328 06:40:18 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:25.328 06:40:18 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:25.328 06:40:18 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:25.328 06:40:18 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:25.328 06:40:18 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:25.328 06:40:18 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:25.328 06:40:18 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:25.328 06:40:18 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:25.328 06:40:18 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:25.328 06:40:18 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:25.328 06:40:18 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:25.328 06:40:18 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:25.328 06:40:18 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:25.328 06:40:18 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:25.328 06:40:18 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:25.328 06:40:18 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:25.328 06:40:18 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:25.328 06:40:18 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:25.328 06:40:18 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:25.328 06:40:18 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:25.328 06:40:18 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:25.598 06:40:18 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:25.598 06:40:18 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:25.598 06:40:18 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | 
.nbd_device' 00:06:25.598 06:40:18 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:25.598 06:40:18 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:25.598 06:40:18 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:25.598 06:40:18 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:25.598 06:40:18 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:25.598 06:40:18 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:25.598 06:40:18 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:25.598 06:40:18 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:25.598 06:40:18 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:25.598 06:40:18 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:25.887 06:40:18 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:25.888 [2024-11-18 06:40:18.923655] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:25.888 [2024-11-18 06:40:18.939771] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:25.888 [2024-11-18 06:40:18.939773] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:25.888 [2024-11-18 06:40:18.968835] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:25.888 [2024-11-18 06:40:18.968882] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:29.185 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:29.185 06:40:21 event.app_repeat -- event/event.sh@38 -- # waitforlisten 70565 /var/tmp/spdk-nbd.sock 00:06:29.185 06:40:21 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 70565 ']' 00:06:29.185 06:40:21 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:29.185 06:40:21 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:29.185 06:40:21 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
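Editor's note: the nbd_dd_data_verify calls traced above reduce to a small write-then-verify loop: fill a scratch file with random data, copy it onto every exported NBD device with O_DIRECT, then compare each device back against the source. A minimal sketch of that flow, reusing the paths and sizes from the log (it assumes /dev/nbd0 and /dev/nbd1 are already exported by the target; it is not the verbatim helper from bdev/nbd_common.sh):

    #!/usr/bin/env bash
    # Sketch of the write/verify flow traced above.
    set -e
    nbd_list=(/dev/nbd0 /dev/nbd1)
    tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest

    # write: 1 MiB of random data, copied onto every exported device
    dd if=/dev/urandom of="$tmp_file" bs=4096 count=256
    for dev in "${nbd_list[@]}"; do
        dd if="$tmp_file" of="$dev" bs=4096 count=256 oflag=direct
    done

    # verify: each device must read back byte-identical to the source;
    # cmp exits non-zero on the first mismatching byte, failing the test
    for dev in "${nbd_list[@]}"; do
        cmp -b -n 1M "$tmp_file" "$dev"
    done
    rm "$tmp_file"

The teardown traced above is the mirror image: nbd_stop_disk per device over the RPC socket, a poll of /proc/partitions until each nbdX entry disappears, and a final nbd_get_disks whose jq-extracted device list must grep-count to zero.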
00:06:29.185 06:40:21 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:29.185 06:40:21 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:29.185 06:40:22 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:29.185 06:40:22 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:06:29.185 06:40:22 event.app_repeat -- event/event.sh@39 -- # killprocess 70565 00:06:29.185 06:40:22 event.app_repeat -- common/autotest_common.sh@954 -- # '[' -z 70565 ']' 00:06:29.185 06:40:22 event.app_repeat -- common/autotest_common.sh@958 -- # kill -0 70565 00:06:29.185 06:40:22 event.app_repeat -- common/autotest_common.sh@959 -- # uname 00:06:29.185 06:40:22 event.app_repeat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:29.185 06:40:22 event.app_repeat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70565 00:06:29.185 killing process with pid 70565 00:06:29.185 06:40:22 event.app_repeat -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:29.185 06:40:22 event.app_repeat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:29.185 06:40:22 event.app_repeat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70565' 00:06:29.185 06:40:22 event.app_repeat -- common/autotest_common.sh@973 -- # kill 70565 00:06:29.185 06:40:22 event.app_repeat -- common/autotest_common.sh@978 -- # wait 70565 00:06:29.185 spdk_app_start is called in Round 0. 00:06:29.185 Shutdown signal received, stop current app iteration 00:06:29.185 Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 reinitialization... 00:06:29.185 spdk_app_start is called in Round 1. 00:06:29.185 Shutdown signal received, stop current app iteration 00:06:29.185 Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 reinitialization... 00:06:29.185 spdk_app_start is called in Round 2. 00:06:29.185 Shutdown signal received, stop current app iteration 00:06:29.185 Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 reinitialization... 00:06:29.185 spdk_app_start is called in Round 3. 00:06:29.185 Shutdown signal received, stop current app iteration 00:06:29.185 06:40:22 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:06:29.185 ************************************ 00:06:29.185 END TEST app_repeat 00:06:29.185 ************************************ 00:06:29.185 06:40:22 event.app_repeat -- event/event.sh@42 -- # return 0 00:06:29.185 00:06:29.185 real 0m16.814s 00:06:29.185 user 0m37.622s 00:06:29.185 sys 0m2.049s 00:06:29.185 06:40:22 event.app_repeat -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:29.185 06:40:22 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:29.185 06:40:22 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:06:29.185 06:40:22 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:06:29.185 06:40:22 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:29.185 06:40:22 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:29.185 06:40:22 event -- common/autotest_common.sh@10 -- # set +x 00:06:29.185 ************************************ 00:06:29.185 START TEST cpu_locks 00:06:29.185 ************************************ 00:06:29.185 06:40:22 event.cpu_locks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:06:29.444 * Looking for test storage... 
00:06:29.444 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:06:29.444 06:40:22 event.cpu_locks -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:29.444 06:40:22 event.cpu_locks -- common/autotest_common.sh@1693 -- # lcov --version 00:06:29.444 06:40:22 event.cpu_locks -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:29.444 06:40:22 event.cpu_locks -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:29.444 06:40:22 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:29.444 06:40:22 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:29.444 06:40:22 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:29.444 06:40:22 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:06:29.444 06:40:22 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:06:29.444 06:40:22 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:06:29.444 06:40:22 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:06:29.444 06:40:22 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:06:29.444 06:40:22 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:06:29.444 06:40:22 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:06:29.444 06:40:22 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:29.444 06:40:22 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:06:29.444 06:40:22 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:06:29.444 06:40:22 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:29.444 06:40:22 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:29.444 06:40:22 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:06:29.444 06:40:22 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:06:29.444 06:40:22 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:29.444 06:40:22 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:06:29.444 06:40:22 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:06:29.444 06:40:22 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:06:29.444 06:40:22 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:06:29.444 06:40:22 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:29.444 06:40:22 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:06:29.444 06:40:22 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:06:29.444 06:40:22 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:29.444 06:40:22 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:29.444 06:40:22 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:06:29.444 06:40:22 event.cpu_locks -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:29.444 06:40:22 event.cpu_locks -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:29.444 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:29.444 --rc genhtml_branch_coverage=1 00:06:29.444 --rc genhtml_function_coverage=1 00:06:29.444 --rc genhtml_legend=1 00:06:29.444 --rc geninfo_all_blocks=1 00:06:29.444 --rc geninfo_unexecuted_blocks=1 00:06:29.444 00:06:29.444 ' 00:06:29.444 06:40:22 event.cpu_locks -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:29.444 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:29.444 --rc genhtml_branch_coverage=1 00:06:29.444 --rc genhtml_function_coverage=1 
00:06:29.444 --rc genhtml_legend=1 00:06:29.444 --rc geninfo_all_blocks=1 00:06:29.444 --rc geninfo_unexecuted_blocks=1 00:06:29.444 00:06:29.444 ' 00:06:29.444 06:40:22 event.cpu_locks -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:29.444 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:29.444 --rc genhtml_branch_coverage=1 00:06:29.444 --rc genhtml_function_coverage=1 00:06:29.444 --rc genhtml_legend=1 00:06:29.444 --rc geninfo_all_blocks=1 00:06:29.444 --rc geninfo_unexecuted_blocks=1 00:06:29.444 00:06:29.444 ' 00:06:29.444 06:40:22 event.cpu_locks -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:29.444 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:29.444 --rc genhtml_branch_coverage=1 00:06:29.444 --rc genhtml_function_coverage=1 00:06:29.444 --rc genhtml_legend=1 00:06:29.444 --rc geninfo_all_blocks=1 00:06:29.444 --rc geninfo_unexecuted_blocks=1 00:06:29.444 00:06:29.444 ' 00:06:29.444 06:40:22 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:06:29.444 06:40:22 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:06:29.444 06:40:22 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:06:29.444 06:40:22 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:06:29.444 06:40:22 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:29.445 06:40:22 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:29.445 06:40:22 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:29.445 ************************************ 00:06:29.445 START TEST default_locks 00:06:29.445 ************************************ 00:06:29.445 06:40:22 event.cpu_locks.default_locks -- common/autotest_common.sh@1129 -- # default_locks 00:06:29.445 06:40:22 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=70985 00:06:29.445 06:40:22 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 70985 00:06:29.445 06:40:22 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 70985 ']' 00:06:29.445 06:40:22 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:29.445 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:29.445 06:40:22 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:29.445 06:40:22 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:29.445 06:40:22 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:29.445 06:40:22 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:29.445 06:40:22 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:29.445 [2024-11-18 06:40:22.460000] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 
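Editor's note: the lt/cmp_versions trace a few lines up is how the suite decides which lcov option spelling to export. A simplified sketch of that comparison, assuming purely numeric fields (the helper in scripts/common.sh additionally validates each field with a decimal regex check, as the trace shows):

    # Simplified sketch of the cmp_versions idiom traced above: split two
    # dotted versions on ".-:" and compare them numerically field by field.
    version_lt() {
        local -a a b
        IFS=.-: read -ra a <<< "$1"
        IFS=.-: read -ra b <<< "$2"
        local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
        for (( i = 0; i < n; i++ )); do
            (( ${a[i]:-0} < ${b[i]:-0} )) && return 0   # strictly older
            (( ${a[i]:-0} > ${b[i]:-0} )) && return 1   # strictly newer
        done
        return 1    # equal versions are not less-than
    }

    # as in the trace: lcov 1.15 predates 2.x, so the --rc lcov_* spelling
    # of the coverage options is exported
    version_lt 1.15 2 && echo "using lcov 1.x option spelling"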
00:06:29.445 [2024-11-18 06:40:22.460256] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70985 ] 00:06:29.703 [2024-11-18 06:40:22.614486] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:29.703 [2024-11-18 06:40:22.633238] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:30.269 06:40:23 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:30.269 06:40:23 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 0 00:06:30.269 06:40:23 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 70985 00:06:30.269 06:40:23 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 70985 00:06:30.269 06:40:23 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:30.528 06:40:23 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 70985 00:06:30.528 06:40:23 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # '[' -z 70985 ']' 00:06:30.528 06:40:23 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # kill -0 70985 00:06:30.528 06:40:23 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # uname 00:06:30.528 06:40:23 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:30.528 06:40:23 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70985 00:06:30.528 killing process with pid 70985 00:06:30.528 06:40:23 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:30.528 06:40:23 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:30.528 06:40:23 event.cpu_locks.default_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70985' 00:06:30.528 06:40:23 event.cpu_locks.default_locks -- common/autotest_common.sh@973 -- # kill 70985 00:06:30.528 06:40:23 event.cpu_locks.default_locks -- common/autotest_common.sh@978 -- # wait 70985 00:06:30.789 06:40:23 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 70985 00:06:30.789 06:40:23 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # local es=0 00:06:30.789 06:40:23 event.cpu_locks.default_locks -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 70985 00:06:30.789 06:40:23 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:06:30.789 06:40:23 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:30.789 06:40:23 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:06:30.789 06:40:23 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:30.789 06:40:23 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # waitforlisten 70985 00:06:30.789 06:40:23 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 70985 ']' 00:06:30.789 06:40:23 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:30.789 06:40:23 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:30.789 Waiting for process to 
start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:30.789 06:40:23 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:30.789 ERROR: process (pid: 70985) is no longer running 00:06:30.789 06:40:23 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:30.789 06:40:23 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:30.789 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (70985) - No such process 00:06:30.789 06:40:23 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:30.789 06:40:23 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 1 00:06:30.789 06:40:23 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # es=1 00:06:30.789 06:40:23 event.cpu_locks.default_locks -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:30.789 06:40:23 event.cpu_locks.default_locks -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:30.789 06:40:23 event.cpu_locks.default_locks -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:30.789 06:40:23 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:06:30.789 06:40:23 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:30.789 06:40:23 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:06:30.789 06:40:23 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:30.789 00:06:30.789 real 0m1.352s 00:06:30.789 user 0m1.376s 00:06:30.789 sys 0m0.408s 00:06:30.789 06:40:23 event.cpu_locks.default_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:30.789 ************************************ 00:06:30.789 06:40:23 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:30.789 END TEST default_locks 00:06:30.789 ************************************ 00:06:30.789 06:40:23 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:06:30.789 06:40:23 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:30.789 06:40:23 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:30.789 06:40:23 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:30.789 ************************************ 00:06:30.789 START TEST default_locks_via_rpc 00:06:30.789 ************************************ 00:06:30.789 06:40:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1129 -- # default_locks_via_rpc 00:06:30.789 06:40:23 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=71037 00:06:30.789 06:40:23 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:30.789 06:40:23 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 71037 00:06:30.789 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
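Editor's note: TEST default_locks, which ends above, verifies the startup-time behavior: a target pinned with -m 0x1 must hold a POSIX lock on a /var/tmp/spdk_cpu_lock_* file for its claimed core, observable through lslocks. A hedged sketch of that locks_exist check (the pid is illustrative):

    # Sketch of the locks_exist check traced above: ask lslocks whether
    # the target process holds a file lock on an spdk_cpu_lock_* path.
    pid=70985    # illustrative; substitute the live spdk_tgt pid
    if lslocks -p "$pid" | grep -q spdk_cpu_lock; then
        echo "pid $pid holds its CPU core lock"
    else
        echo "pid $pid holds no spdk_cpu_lock" >&2
        exit 1
    fi

The second half of the test above is the negative path: after killprocess, waitforlisten on the dead pid is wrapped in NOT, so the "No such process" error and return 1 are the expected outcome.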
00:06:30.789 06:40:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 71037 ']' 00:06:30.789 06:40:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:30.789 06:40:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:30.789 06:40:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:30.789 06:40:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:30.789 06:40:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:30.789 [2024-11-18 06:40:23.862307] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 00:06:30.789 [2024-11-18 06:40:23.862416] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71037 ] 00:06:31.049 [2024-11-18 06:40:24.021491] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:31.049 [2024-11-18 06:40:24.040235] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:31.620 06:40:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:31.620 06:40:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:31.620 06:40:24 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:06:31.620 06:40:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:31.620 06:40:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:31.881 06:40:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:31.881 06:40:24 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:06:31.881 06:40:24 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:31.881 06:40:24 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:06:31.881 06:40:24 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:31.881 06:40:24 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:06:31.881 06:40:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:31.881 06:40:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:31.881 06:40:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:31.881 06:40:24 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 71037 00:06:31.881 06:40:24 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:31.881 06:40:24 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 71037 00:06:31.881 06:40:24 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 71037 00:06:31.881 06:40:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # '[' -z 71037 ']' 00:06:31.881 06:40:24 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # kill -0 71037 00:06:31.881 06:40:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # uname 00:06:31.881 06:40:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:31.881 06:40:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71037 00:06:31.881 killing process with pid 71037 00:06:31.881 06:40:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:31.881 06:40:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:31.881 06:40:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71037' 00:06:31.881 06:40:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@973 -- # kill 71037 00:06:31.881 06:40:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@978 -- # wait 71037 00:06:32.141 00:06:32.141 real 0m1.413s 00:06:32.141 user 0m1.443s 00:06:32.141 sys 0m0.409s 00:06:32.141 ************************************ 00:06:32.142 END TEST default_locks_via_rpc 00:06:32.142 ************************************ 00:06:32.142 06:40:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:32.142 06:40:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:32.403 06:40:25 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:06:32.403 06:40:25 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:32.403 06:40:25 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:32.403 06:40:25 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:32.403 ************************************ 00:06:32.403 START TEST non_locking_app_on_locked_coremask 00:06:32.403 ************************************ 00:06:32.403 06:40:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # non_locking_app_on_locked_coremask 00:06:32.403 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:32.403 06:40:25 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=71079 00:06:32.403 06:40:25 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 71079 /var/tmp/spdk.sock 00:06:32.403 06:40:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 71079 ']' 00:06:32.403 06:40:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:32.403 06:40:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:32.403 06:40:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
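Editor's note: TEST default_locks_via_rpc, which finishes above, toggles the same per-core locks at runtime instead of at startup, using the framework_disable_cpumask_locks and framework_enable_cpumask_locks RPCs seen in the trace. A minimal sketch of that round-trip (socket path as in the log, pid illustrative; the trace checks the released state via the lock-file glob, while this variant uses lslocks):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk.sock
    pid=71037    # illustrative target pid

    # release the per-core locks at runtime and confirm none are held...
    "$rpc" -s "$sock" framework_disable_cpumask_locks
    lslocks -p "$pid" | grep -q spdk_cpu_lock && { echo "lock still held" >&2; exit 1; }

    # ...then re-acquire them and confirm the lock is back
    "$rpc" -s "$sock" framework_enable_cpumask_locks
    lslocks -p "$pid" | grep -q spdk_cpu_lock || { echo "lock missing" >&2; exit 1; }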
00:06:32.403 06:40:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:32.403 06:40:25 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:32.403 06:40:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:32.403 [2024-11-18 06:40:25.334062] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 00:06:32.403 [2024-11-18 06:40:25.334185] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71079 ] 00:06:32.663 [2024-11-18 06:40:25.493909] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:32.663 [2024-11-18 06:40:25.522621] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:33.234 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:33.234 06:40:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:33.234 06:40:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:33.234 06:40:26 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=71095 00:06:33.234 06:40:26 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 71095 /var/tmp/spdk2.sock 00:06:33.234 06:40:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 71095 ']' 00:06:33.234 06:40:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:33.234 06:40:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:33.234 06:40:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:33.234 06:40:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:33.234 06:40:26 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:06:33.234 06:40:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:33.234 [2024-11-18 06:40:26.266394] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 00:06:33.234 [2024-11-18 06:40:26.266545] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71095 ] 00:06:33.495 [2024-11-18 06:40:26.444486] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
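Editor's note: the second target above comes up on the very core the first one has locked, and succeeds only because it passes --disable-cpumask-locks (hence the "CPU core locks deactivated." notice) plus a separate RPC socket. A sketch of that two-instance arrangement, with binary and socket paths as in the log:

    tgt=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt

    # first instance claims core 0 and holds /var/tmp/spdk_cpu_lock_000
    "$tgt" -m 0x1 &
    pid1=$!
    # (the real test waits for each RPC socket before continuing)

    # second instance shares core 0: --disable-cpumask-locks skips the
    # claim, and -r keeps its RPC socket apart from the first target's
    "$tgt" -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &
    pid2=$!

    # ... exercise both sockets, then tear down
    kill "$pid1" "$pid2"
    wait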
00:06:33.495 [2024-11-18 06:40:26.444558] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:33.495 [2024-11-18 06:40:26.492010] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:34.063 06:40:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:34.063 06:40:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:34.063 06:40:27 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 71079 00:06:34.063 06:40:27 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 71079 00:06:34.063 06:40:27 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:34.324 06:40:27 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 71079 00:06:34.324 06:40:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 71079 ']' 00:06:34.324 06:40:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 71079 00:06:34.324 06:40:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:34.325 06:40:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:34.325 06:40:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71079 00:06:34.586 killing process with pid 71079 00:06:34.586 06:40:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:34.586 06:40:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:34.586 06:40:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71079' 00:06:34.586 06:40:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 71079 00:06:34.586 06:40:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 71079 00:06:34.847 06:40:27 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 71095 00:06:34.847 06:40:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 71095 ']' 00:06:34.847 06:40:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 71095 00:06:34.847 06:40:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:34.847 06:40:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:34.847 06:40:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71095 00:06:34.847 killing process with pid 71095 00:06:34.847 06:40:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:34.847 06:40:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:34.847 06:40:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71095' 00:06:34.847 06:40:27 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 71095 00:06:34.847 06:40:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 71095 00:06:35.109 ************************************ 00:06:35.109 END TEST non_locking_app_on_locked_coremask 00:06:35.109 ************************************ 00:06:35.109 00:06:35.109 real 0m2.820s 00:06:35.109 user 0m3.068s 00:06:35.109 sys 0m0.864s 00:06:35.109 06:40:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:35.109 06:40:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:35.109 06:40:28 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:06:35.109 06:40:28 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:35.109 06:40:28 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:35.109 06:40:28 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:35.109 ************************************ 00:06:35.109 START TEST locking_app_on_unlocked_coremask 00:06:35.109 ************************************ 00:06:35.109 06:40:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_unlocked_coremask 00:06:35.109 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:35.109 06:40:28 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=71153 00:06:35.109 06:40:28 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 71153 /var/tmp/spdk.sock 00:06:35.109 06:40:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 71153 ']' 00:06:35.109 06:40:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:35.109 06:40:28 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:06:35.109 06:40:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:35.109 06:40:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:35.109 06:40:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:35.109 06:40:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:35.371 [2024-11-18 06:40:28.207464] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 00:06:35.371 [2024-11-18 06:40:28.207730] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71153 ] 00:06:35.371 [2024-11-18 06:40:28.361657] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:35.371 [2024-11-18 06:40:28.361799] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:35.371 [2024-11-18 06:40:28.378749] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:36.312 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:36.312 06:40:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:36.312 06:40:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:36.312 06:40:29 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=71169 00:06:36.312 06:40:29 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 71169 /var/tmp/spdk2.sock 00:06:36.312 06:40:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 71169 ']' 00:06:36.312 06:40:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:36.312 06:40:29 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:36.312 06:40:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:36.312 06:40:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:36.312 06:40:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:36.312 06:40:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:36.312 [2024-11-18 06:40:29.106954] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 
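Editor's note: here the roles are reversed relative to the previous test: the first target started with --disable-cpumask-locks, so the second, lock-taking target coming up above can still claim core 0. Once both are verified, teardown is the killprocess sequence that recurs throughout this trace; a simplified sketch (the real helper in autotest_common.sh also branches on uname for non-Linux hosts):

    # Simplified sketch of the killprocess pattern repeated in this trace.
    killprocess() {
        local pid=$1
        kill -0 "$pid" 2>/dev/null || return 0      # already gone
        local name
        name=$(ps --no-headers -o comm= "$pid")     # e.g. reactor_0
        [ "$name" = sudo ] && return 1              # sketch: the real helper special-cases sudo wrappers
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"                                 # reap the child, surface its status
    }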
00:06:36.312 [2024-11-18 06:40:29.107248] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71169 ] 00:06:36.312 [2024-11-18 06:40:29.269186] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:36.312 [2024-11-18 06:40:29.301531] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:36.884 06:40:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:36.884 06:40:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:36.884 06:40:29 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 71169 00:06:36.884 06:40:29 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 71169 00:06:36.884 06:40:29 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:37.145 06:40:30 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 71153 00:06:37.145 06:40:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 71153 ']' 00:06:37.145 06:40:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 71153 00:06:37.145 06:40:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:37.145 06:40:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:37.145 06:40:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71153 00:06:37.406 killing process with pid 71153 00:06:37.406 06:40:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:37.406 06:40:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:37.406 06:40:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71153' 00:06:37.406 06:40:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 71153 00:06:37.406 06:40:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 71153 00:06:37.667 06:40:30 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 71169 00:06:37.667 06:40:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 71169 ']' 00:06:37.667 06:40:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 71169 00:06:37.667 06:40:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:37.667 06:40:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:37.667 06:40:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71169 00:06:37.667 06:40:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:37.667 killing process with pid 71169 00:06:37.667 06:40:30 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:37.667 06:40:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71169' 00:06:37.667 06:40:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 71169 00:06:37.667 06:40:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 71169 00:06:37.928 ************************************ 00:06:37.928 END TEST locking_app_on_unlocked_coremask 00:06:37.928 ************************************ 00:06:37.928 00:06:37.928 real 0m2.769s 00:06:37.928 user 0m3.108s 00:06:37.928 sys 0m0.726s 00:06:37.928 06:40:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:37.928 06:40:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:37.928 06:40:30 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:06:37.928 06:40:30 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:37.928 06:40:30 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:37.928 06:40:30 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:37.928 ************************************ 00:06:37.928 START TEST locking_app_on_locked_coremask 00:06:37.928 ************************************ 00:06:37.928 06:40:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_locked_coremask 00:06:37.928 06:40:30 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=71227 00:06:37.928 06:40:30 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 71227 /var/tmp/spdk.sock 00:06:37.928 06:40:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 71227 ']' 00:06:37.928 06:40:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:37.928 06:40:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:37.928 06:40:30 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:37.928 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:37.928 06:40:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:37.928 06:40:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:37.928 06:40:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:38.189 [2024-11-18 06:40:31.032484] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 
00:06:38.189 [2024-11-18 06:40:31.032600] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71227 ] 00:06:38.189 [2024-11-18 06:40:31.176015] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:38.189 [2024-11-18 06:40:31.192115] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:38.823 06:40:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:38.823 06:40:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:38.824 06:40:31 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=71232 00:06:38.824 06:40:31 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 71232 /var/tmp/spdk2.sock 00:06:38.824 06:40:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # local es=0 00:06:38.824 06:40:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 71232 /var/tmp/spdk2.sock 00:06:38.824 06:40:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:06:38.824 06:40:31 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:38.824 06:40:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:38.824 06:40:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:06:38.824 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:38.824 06:40:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:38.824 06:40:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # waitforlisten 71232 /var/tmp/spdk2.sock 00:06:38.824 06:40:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 71232 ']' 00:06:38.824 06:40:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:38.824 06:40:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:38.824 06:40:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:38.824 06:40:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:38.824 06:40:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:39.116 [2024-11-18 06:40:31.890646] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 
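Editor's note: the launch above is doomed by design: the second target runs without --disable-cpumask-locks on a core that pid 71227 already holds, and the next lines show the "Cannot create lock on core 0" error followed by NOT waitforlisten returning success. The NOT idiom simply inverts an exit status so an expected failure passes; a minimal sketch (the helper in autotest_common.sh adds argument validation and exit-status bookkeeping, per the valid_exec_arg and es= lines in the trace):

    # Minimal sketch of the NOT idiom used above: succeed only when the
    # wrapped command fails.
    NOT() {
        if "$@"; then
            return 1    # command unexpectedly succeeded
        fi
        return 0        # command failed, as expected
    }

    # e.g. wrapping a command that must fail:
    NOT false && echo "expected failure observed"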
00:06:39.116 [2024-11-18 06:40:31.892436] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71232 ] 00:06:39.116 [2024-11-18 06:40:32.062146] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 71227 has claimed it. 00:06:39.116 [2024-11-18 06:40:32.062188] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:39.687 ERROR: process (pid: 71232) is no longer running 00:06:39.687 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (71232) - No such process 00:06:39.687 06:40:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:39.687 06:40:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 1 00:06:39.687 06:40:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # es=1 00:06:39.687 06:40:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:39.687 06:40:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:39.687 06:40:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:39.687 06:40:32 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 71227 00:06:39.687 06:40:32 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 71227 00:06:39.687 06:40:32 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:39.687 06:40:32 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 71227 00:06:39.687 06:40:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 71227 ']' 00:06:39.687 06:40:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 71227 00:06:39.687 06:40:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:39.947 06:40:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:39.947 06:40:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71227 00:06:39.947 killing process with pid 71227 00:06:39.947 06:40:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:39.947 06:40:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:39.947 06:40:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71227' 00:06:39.947 06:40:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 71227 00:06:39.947 06:40:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 71227 00:06:39.947 ************************************ 00:06:39.947 END TEST locking_app_on_locked_coremask 00:06:39.947 ************************************ 00:06:39.947 00:06:39.947 real 0m2.048s 00:06:39.947 user 0m2.300s 00:06:39.947 sys 0m0.487s 00:06:39.947 06:40:33 
event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:39.947 06:40:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:40.208 06:40:33 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:06:40.208 06:40:33 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:40.208 06:40:33 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:40.208 06:40:33 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:40.208 ************************************ 00:06:40.208 START TEST locking_overlapped_coremask 00:06:40.208 ************************************ 00:06:40.208 06:40:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask 00:06:40.208 06:40:33 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=71285 00:06:40.208 06:40:33 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 71285 /var/tmp/spdk.sock 00:06:40.208 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:40.208 06:40:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 71285 ']' 00:06:40.208 06:40:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:40.208 06:40:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:40.208 06:40:33 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:06:40.208 06:40:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:40.208 06:40:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:40.208 06:40:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:40.208 [2024-11-18 06:40:33.133646] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 
00:06:40.208 [2024-11-18 06:40:33.133762] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71285 ] 00:06:40.469 [2024-11-18 06:40:33.294847] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:40.469 [2024-11-18 06:40:33.326463] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:40.469 [2024-11-18 06:40:33.326775] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:40.469 [2024-11-18 06:40:33.326967] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:41.041 06:40:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:41.041 06:40:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:41.041 06:40:33 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=71297 00:06:41.041 06:40:33 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:06:41.041 06:40:33 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 71297 /var/tmp/spdk2.sock 00:06:41.041 06:40:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # local es=0 00:06:41.041 06:40:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 71297 /var/tmp/spdk2.sock 00:06:41.041 06:40:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:06:41.041 06:40:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:41.041 06:40:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:06:41.041 06:40:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:41.041 06:40:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # waitforlisten 71297 /var/tmp/spdk2.sock 00:06:41.041 06:40:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 71297 ']' 00:06:41.041 06:40:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:41.041 06:40:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:41.041 06:40:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:41.041 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:41.041 06:40:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:41.041 06:40:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:41.041 [2024-11-18 06:40:34.054766] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 
00:06:41.041 [2024-11-18 06:40:34.055144] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71297 ] 00:06:41.302 [2024-11-18 06:40:34.230748] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 71285 has claimed it. 00:06:41.302 [2024-11-18 06:40:34.230823] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:41.873 ERROR: process (pid: 71297) is no longer running 00:06:41.873 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (71297) - No such process 00:06:41.873 06:40:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:41.873 06:40:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 1 00:06:41.873 06:40:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # es=1 00:06:41.873 06:40:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:41.873 06:40:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:41.873 06:40:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:41.873 06:40:34 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:06:41.873 06:40:34 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:41.873 06:40:34 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:41.873 06:40:34 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:41.873 06:40:34 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 71285 00:06:41.873 06:40:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # '[' -z 71285 ']' 00:06:41.873 06:40:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # kill -0 71285 00:06:41.873 06:40:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # uname 00:06:41.873 06:40:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:41.873 06:40:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71285 00:06:41.873 06:40:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:41.873 06:40:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:41.873 killing process with pid 71285 00:06:41.873 06:40:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71285' 00:06:41.873 06:40:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@973 -- # kill 71285 00:06:41.873 06:40:34 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@978 -- # wait 71285 00:06:42.133 00:06:42.133 real 0m1.920s 00:06:42.133 user 0m5.197s 00:06:42.133 sys 0m0.505s 00:06:42.133 06:40:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:42.133 ************************************ 00:06:42.133 END TEST locking_overlapped_coremask 00:06:42.133 ************************************ 00:06:42.133 06:40:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:42.133 06:40:35 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:06:42.133 06:40:35 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:42.133 06:40:35 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:42.133 06:40:35 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:42.133 ************************************ 00:06:42.133 START TEST locking_overlapped_coremask_via_rpc 00:06:42.133 ************************************ 00:06:42.133 06:40:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask_via_rpc 00:06:42.133 06:40:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=71345 00:06:42.133 06:40:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 71345 /var/tmp/spdk.sock 00:06:42.133 06:40:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 71345 ']' 00:06:42.133 06:40:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:42.133 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:42.133 06:40:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:42.133 06:40:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:42.133 06:40:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:42.133 06:40:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:06:42.133 06:40:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:42.133 [2024-11-18 06:40:35.108250] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 00:06:42.133 [2024-11-18 06:40:35.108380] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71345 ] 00:06:42.391 [2024-11-18 06:40:35.270234] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
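This test replays the same core overlap, but both targets start with --disable-cpumask-locks (the first deactivation notice is just above; the second target follows), so nothing is claimed at boot and masks 0x7 and 0x1c coexist. Locks are then taken after the fact through the framework_enable_cpumask_locks RPC: the first target wins core 2, and the same call on the second target's socket fails, as the JSON-RPC exchange further down shows. Condensed, with waitforlisten and error handling elided, the flow being exercised is:

    # Sketch of the RPC-driven locking flow below; the flags, sockets and
    # the framework_enable_cpumask_locks method are all taken from this log.
    SPDK_BIN=/home/vagrant/spdk_repo/spdk/build/bin
    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $SPDK_BIN/spdk_tgt -m 0x7 --disable-cpumask-locks &                           # cores 0-2, no locks yet
    $SPDK_BIN/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks &   # cores 2-4, overlap tolerated
    $RPC framework_enable_cpumask_locks                            # first target claims cores 0-2
    $RPC -s /var/tmp/spdk2.sock framework_enable_cpumask_locks     # fails: core 2 already claimed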
00:06:42.391 [2024-11-18 06:40:35.270284] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:42.391 [2024-11-18 06:40:35.297119] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:42.391 [2024-11-18 06:40:35.297362] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:42.391 [2024-11-18 06:40:35.297371] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:42.957 06:40:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:42.957 06:40:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:42.957 06:40:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=71352 00:06:42.957 06:40:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:06:42.957 06:40:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 71352 /var/tmp/spdk2.sock 00:06:42.957 06:40:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 71352 ']' 00:06:42.957 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:42.957 06:40:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:42.957 06:40:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:42.957 06:40:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:42.957 06:40:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:42.957 06:40:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:42.957 [2024-11-18 06:40:36.019262] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 00:06:42.958 [2024-11-18 06:40:36.019493] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71352 ] 00:06:43.215 [2024-11-18 06:40:36.191373] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:43.215 [2024-11-18 06:40:36.191426] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:43.215 [2024-11-18 06:40:36.231292] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:06:43.216 [2024-11-18 06:40:36.231423] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:43.216 [2024-11-18 06:40:36.231489] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:06:43.785 06:40:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:43.785 06:40:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:43.785 06:40:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:06:43.785 06:40:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:43.785 06:40:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:43.785 06:40:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:43.785 06:40:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:43.785 06:40:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # local es=0 00:06:43.785 06:40:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:43.785 06:40:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:06:43.785 06:40:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:43.785 06:40:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:06:43.785 06:40:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:43.786 06:40:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:43.786 06:40:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:43.786 06:40:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:43.786 [2024-11-18 06:40:36.864133] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 71345 has claimed it. 
00:06:44.046 request: 00:06:44.046 { 00:06:44.046 "method": "framework_enable_cpumask_locks", 00:06:44.046 "req_id": 1 00:06:44.046 } 00:06:44.046 Got JSON-RPC error response 00:06:44.046 response: 00:06:44.046 { 00:06:44.046 "code": -32603, 00:06:44.046 "message": "Failed to claim CPU core: 2" 00:06:44.046 } 00:06:44.046 06:40:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:06:44.046 06:40:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # es=1 00:06:44.046 06:40:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:44.046 06:40:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:44.046 06:40:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:44.046 06:40:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 71345 /var/tmp/spdk.sock 00:06:44.046 06:40:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 71345 ']' 00:06:44.046 06:40:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:44.046 06:40:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:44.046 06:40:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:44.046 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:44.046 06:40:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:44.046 06:40:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:44.046 06:40:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:44.046 06:40:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:44.046 06:40:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 71352 /var/tmp/spdk2.sock 00:06:44.046 06:40:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 71352 ']' 00:06:44.046 06:40:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:44.046 06:40:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:44.046 06:40:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:44.046 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
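Once both targets are confirmed up, check_remaining_locks (run below, and earlier at the end of the previous test) asserts that exactly the expected lock files survive: it globs /var/tmp/spdk_cpu_lock_* and compares the result against the brace expansion /var/tmp/spdk_cpu_lock_{000..002}, one file per core of mask 0x7 and nothing stale. Stripped of the xtrace escaping, the assertion from cpu_locks.sh lines 36-38 is:

    # The check_remaining_locks assertion in readable form: live lock
    # files must match the expected set for cores 0-2 exactly.
    locks=(/var/tmp/spdk_cpu_lock_*)
    locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})
    [[ "${locks[*]}" == "${locks_expected[*]}" ]] || echo 'stale or missing CPU core locks' >&2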
00:06:44.046 06:40:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:44.046 06:40:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:44.307 06:40:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:44.307 06:40:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:44.307 06:40:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:06:44.307 06:40:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:44.307 06:40:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:44.307 06:40:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:44.307 00:06:44.307 real 0m2.245s 00:06:44.307 user 0m1.048s 00:06:44.307 sys 0m0.122s 00:06:44.307 06:40:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:44.307 06:40:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:44.307 ************************************ 00:06:44.307 END TEST locking_overlapped_coremask_via_rpc 00:06:44.307 ************************************ 00:06:44.307 06:40:37 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:06:44.307 06:40:37 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 71345 ]] 00:06:44.307 06:40:37 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 71345 00:06:44.307 06:40:37 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 71345 ']' 00:06:44.307 06:40:37 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 71345 00:06:44.307 06:40:37 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:06:44.307 06:40:37 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:44.307 06:40:37 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71345 00:06:44.307 killing process with pid 71345 00:06:44.307 06:40:37 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:44.307 06:40:37 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:44.307 06:40:37 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71345' 00:06:44.307 06:40:37 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 71345 00:06:44.307 06:40:37 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 71345 00:06:44.565 06:40:37 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 71352 ]] 00:06:44.565 06:40:37 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 71352 00:06:44.565 06:40:37 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 71352 ']' 00:06:44.565 06:40:37 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 71352 00:06:44.565 06:40:37 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:06:44.565 06:40:37 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:44.565 
06:40:37 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71352 00:06:44.565 killing process with pid 71352 00:06:44.565 06:40:37 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:06:44.565 06:40:37 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:06:44.565 06:40:37 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71352' 00:06:44.565 06:40:37 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 71352 00:06:44.565 06:40:37 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 71352 00:06:44.826 06:40:37 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:44.826 06:40:37 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:06:44.826 Process with pid 71345 is not found 00:06:44.826 Process with pid 71352 is not found 00:06:44.826 06:40:37 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 71345 ]] 00:06:44.826 06:40:37 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 71345 00:06:44.826 06:40:37 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 71345 ']' 00:06:44.826 06:40:37 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 71345 00:06:44.826 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (71345) - No such process 00:06:44.826 06:40:37 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 71345 is not found' 00:06:44.826 06:40:37 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 71352 ]] 00:06:44.826 06:40:37 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 71352 00:06:44.826 06:40:37 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 71352 ']' 00:06:44.826 06:40:37 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 71352 00:06:44.826 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (71352) - No such process 00:06:44.826 06:40:37 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 71352 is not found' 00:06:44.826 06:40:37 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:44.826 ************************************ 00:06:44.826 END TEST cpu_locks 00:06:44.826 ************************************ 00:06:44.826 00:06:44.826 real 0m15.588s 00:06:44.826 user 0m27.533s 00:06:44.826 sys 0m4.255s 00:06:44.826 06:40:37 event.cpu_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:44.826 06:40:37 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:44.826 ************************************ 00:06:44.826 END TEST event 00:06:44.826 ************************************ 00:06:44.826 00:06:44.826 real 0m41.170s 00:06:44.826 user 1m19.734s 00:06:44.826 sys 0m7.116s 00:06:44.826 06:40:37 event -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:44.826 06:40:37 event -- common/autotest_common.sh@10 -- # set +x 00:06:44.826 06:40:37 -- spdk/autotest.sh@169 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:06:44.826 06:40:37 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:44.826 06:40:37 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:44.826 06:40:37 -- common/autotest_common.sh@10 -- # set +x 00:06:45.086 ************************************ 00:06:45.086 START TEST thread 00:06:45.086 ************************************ 00:06:45.086 06:40:37 thread -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:06:45.086 * Looking for test storage... 
00:06:45.086 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:06:45.087 06:40:37 thread -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:45.087 06:40:37 thread -- common/autotest_common.sh@1693 -- # lcov --version 00:06:45.087 06:40:37 thread -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:45.087 06:40:38 thread -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:45.087 06:40:38 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:45.087 06:40:38 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:45.087 06:40:38 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:45.087 06:40:38 thread -- scripts/common.sh@336 -- # IFS=.-: 00:06:45.087 06:40:38 thread -- scripts/common.sh@336 -- # read -ra ver1 00:06:45.087 06:40:38 thread -- scripts/common.sh@337 -- # IFS=.-: 00:06:45.087 06:40:38 thread -- scripts/common.sh@337 -- # read -ra ver2 00:06:45.087 06:40:38 thread -- scripts/common.sh@338 -- # local 'op=<' 00:06:45.087 06:40:38 thread -- scripts/common.sh@340 -- # ver1_l=2 00:06:45.087 06:40:38 thread -- scripts/common.sh@341 -- # ver2_l=1 00:06:45.087 06:40:38 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:45.087 06:40:38 thread -- scripts/common.sh@344 -- # case "$op" in 00:06:45.087 06:40:38 thread -- scripts/common.sh@345 -- # : 1 00:06:45.087 06:40:38 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:45.087 06:40:38 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:45.087 06:40:38 thread -- scripts/common.sh@365 -- # decimal 1 00:06:45.087 06:40:38 thread -- scripts/common.sh@353 -- # local d=1 00:06:45.087 06:40:38 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:45.087 06:40:38 thread -- scripts/common.sh@355 -- # echo 1 00:06:45.087 06:40:38 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:06:45.087 06:40:38 thread -- scripts/common.sh@366 -- # decimal 2 00:06:45.087 06:40:38 thread -- scripts/common.sh@353 -- # local d=2 00:06:45.087 06:40:38 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:45.087 06:40:38 thread -- scripts/common.sh@355 -- # echo 2 00:06:45.087 06:40:38 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:06:45.087 06:40:38 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:45.087 06:40:38 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:45.087 06:40:38 thread -- scripts/common.sh@368 -- # return 0 00:06:45.087 06:40:38 thread -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:45.087 06:40:38 thread -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:45.087 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:45.087 --rc genhtml_branch_coverage=1 00:06:45.087 --rc genhtml_function_coverage=1 00:06:45.087 --rc genhtml_legend=1 00:06:45.087 --rc geninfo_all_blocks=1 00:06:45.087 --rc geninfo_unexecuted_blocks=1 00:06:45.087 00:06:45.087 ' 00:06:45.087 06:40:38 thread -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:45.087 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:45.087 --rc genhtml_branch_coverage=1 00:06:45.087 --rc genhtml_function_coverage=1 00:06:45.087 --rc genhtml_legend=1 00:06:45.087 --rc geninfo_all_blocks=1 00:06:45.087 --rc geninfo_unexecuted_blocks=1 00:06:45.087 00:06:45.087 ' 00:06:45.087 06:40:38 thread -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:45.087 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:06:45.087 --rc genhtml_branch_coverage=1 00:06:45.087 --rc genhtml_function_coverage=1 00:06:45.087 --rc genhtml_legend=1 00:06:45.087 --rc geninfo_all_blocks=1 00:06:45.087 --rc geninfo_unexecuted_blocks=1 00:06:45.087 00:06:45.087 ' 00:06:45.087 06:40:38 thread -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:45.087 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:45.087 --rc genhtml_branch_coverage=1 00:06:45.087 --rc genhtml_function_coverage=1 00:06:45.087 --rc genhtml_legend=1 00:06:45.087 --rc geninfo_all_blocks=1 00:06:45.087 --rc geninfo_unexecuted_blocks=1 00:06:45.087 00:06:45.087 ' 00:06:45.087 06:40:38 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:45.087 06:40:38 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:06:45.087 06:40:38 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:45.087 06:40:38 thread -- common/autotest_common.sh@10 -- # set +x 00:06:45.087 ************************************ 00:06:45.087 START TEST thread_poller_perf 00:06:45.087 ************************************ 00:06:45.087 06:40:38 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:45.087 [2024-11-18 06:40:38.108507] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 00:06:45.087 [2024-11-18 06:40:38.108639] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71489 ] 00:06:45.348 [2024-11-18 06:40:38.269817] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:45.348 [2024-11-18 06:40:38.298283] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:45.348 Running 1000 pollers for 1 seconds with 1 microseconds period. 
00:06:46.291 [2024-11-18T06:40:39.378Z] ====================================== 00:06:46.291 [2024-11-18T06:40:39.378Z] busy:2613566010 (cyc) 00:06:46.291 [2024-11-18T06:40:39.378Z] total_run_count: 305000 00:06:46.291 [2024-11-18T06:40:39.378Z] tsc_hz: 2600000000 (cyc) 00:06:46.291 [2024-11-18T06:40:39.378Z] ====================================== 00:06:46.291 [2024-11-18T06:40:39.378Z] poller_cost: 8569 (cyc), 3295 (nsec) 00:06:46.291 00:06:46.291 real 0m1.283s 00:06:46.291 user 0m1.104s 00:06:46.291 sys 0m0.070s 00:06:46.291 06:40:39 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:46.291 ************************************ 00:06:46.291 END TEST thread_poller_perf 00:06:46.291 ************************************ 00:06:46.291 06:40:39 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:46.553 06:40:39 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:46.553 06:40:39 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:06:46.553 06:40:39 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:46.553 06:40:39 thread -- common/autotest_common.sh@10 -- # set +x 00:06:46.553 ************************************ 00:06:46.553 START TEST thread_poller_perf 00:06:46.553 ************************************ 00:06:46.553 06:40:39 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:46.553 [2024-11-18 06:40:39.460307] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 00:06:46.553 [2024-11-18 06:40:39.460435] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71521 ] 00:06:46.553 [2024-11-18 06:40:39.621426] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:46.814 Running 1000 pollers for 1 seconds with 0 microseconds period. 
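The numbers in the first result block above decompose as poller_cost = busy cycles / total_run_count, converted to nanoseconds through tsc_hz: 2613566010 / 305000 is roughly 8569 cycles per poller invocation, and 8569 cycles at 2.6 cycles per nanosecond is roughly 3295 ns. The second run (announced just above, results below) passes -l 0, a 0-microsecond period, i.e. pollers that fire on every reactor iteration instead of on a timer, which is why its per-call cost collapses to a few hundred cycles. Recomputing the first run's line from the printed counters:

    # Recompute poller_cost from the counters above (awk for the math;
    # values copied verbatim from the first run).
    awk 'BEGIN {
        busy = 2613566010; runs = 305000; tsc_hz = 2600000000
        cyc = busy / runs                          # ~8569 cycles per poll
        printf "poller_cost: %d (cyc), %d (nsec)\n", cyc, cyc * 1e9 / tsc_hz
    }'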
00:06:46.814 [2024-11-18 06:40:39.648741] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:47.756 [2024-11-18T06:40:40.843Z] ====================================== 00:06:47.756 [2024-11-18T06:40:40.843Z] busy:2603940998 (cyc) 00:06:47.756 [2024-11-18T06:40:40.843Z] total_run_count: 3966000 00:06:47.756 [2024-11-18T06:40:40.843Z] tsc_hz: 2600000000 (cyc) 00:06:47.756 [2024-11-18T06:40:40.843Z] ====================================== 00:06:47.756 [2024-11-18T06:40:40.843Z] poller_cost: 656 (cyc), 252 (nsec) 00:06:47.756 00:06:47.756 real 0m1.277s 00:06:47.756 user 0m1.101s 00:06:47.756 sys 0m0.068s 00:06:47.756 06:40:40 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:47.756 ************************************ 00:06:47.756 END TEST thread_poller_perf 00:06:47.756 ************************************ 00:06:47.756 06:40:40 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:47.756 06:40:40 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:06:47.756 ************************************ 00:06:47.756 END TEST thread 00:06:47.756 ************************************ 00:06:47.756 00:06:47.756 real 0m2.839s 00:06:47.756 user 0m2.321s 00:06:47.756 sys 0m0.266s 00:06:47.756 06:40:40 thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:47.756 06:40:40 thread -- common/autotest_common.sh@10 -- # set +x 00:06:47.756 06:40:40 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:06:47.756 06:40:40 -- spdk/autotest.sh@176 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:06:47.756 06:40:40 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:47.756 06:40:40 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:47.756 06:40:40 -- common/autotest_common.sh@10 -- # set +x 00:06:47.756 ************************************ 00:06:47.756 START TEST app_cmdline 00:06:47.756 ************************************ 00:06:47.756 06:40:40 app_cmdline -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:06:48.017 * Looking for test storage... 
00:06:48.017 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:06:48.017 06:40:40 app_cmdline -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:48.017 06:40:40 app_cmdline -- common/autotest_common.sh@1693 -- # lcov --version 00:06:48.017 06:40:40 app_cmdline -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:48.017 06:40:40 app_cmdline -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:48.017 06:40:40 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:48.017 06:40:40 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:48.017 06:40:40 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:48.017 06:40:40 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:06:48.017 06:40:40 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:06:48.017 06:40:40 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:06:48.017 06:40:40 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:06:48.017 06:40:40 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:06:48.017 06:40:40 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:06:48.017 06:40:40 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:06:48.017 06:40:40 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:48.017 06:40:40 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:06:48.017 06:40:40 app_cmdline -- scripts/common.sh@345 -- # : 1 00:06:48.017 06:40:40 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:48.017 06:40:40 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:48.017 06:40:40 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:06:48.017 06:40:40 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:06:48.017 06:40:40 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:48.017 06:40:40 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:06:48.017 06:40:40 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:06:48.017 06:40:40 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:06:48.017 06:40:40 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:06:48.017 06:40:40 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:48.017 06:40:40 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:06:48.017 06:40:40 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:06:48.017 06:40:40 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:48.017 06:40:40 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:48.017 06:40:40 app_cmdline -- scripts/common.sh@368 -- # return 0 00:06:48.017 06:40:40 app_cmdline -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:48.017 06:40:40 app_cmdline -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:48.017 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:48.017 --rc genhtml_branch_coverage=1 00:06:48.017 --rc genhtml_function_coverage=1 00:06:48.017 --rc genhtml_legend=1 00:06:48.017 --rc geninfo_all_blocks=1 00:06:48.017 --rc geninfo_unexecuted_blocks=1 00:06:48.017 00:06:48.017 ' 00:06:48.017 06:40:40 app_cmdline -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:48.017 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:48.017 --rc genhtml_branch_coverage=1 00:06:48.017 --rc genhtml_function_coverage=1 00:06:48.017 --rc genhtml_legend=1 00:06:48.017 --rc geninfo_all_blocks=1 00:06:48.017 --rc geninfo_unexecuted_blocks=1 00:06:48.017 
00:06:48.017 ' 00:06:48.017 06:40:40 app_cmdline -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:48.017 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:48.017 --rc genhtml_branch_coverage=1 00:06:48.017 --rc genhtml_function_coverage=1 00:06:48.017 --rc genhtml_legend=1 00:06:48.017 --rc geninfo_all_blocks=1 00:06:48.017 --rc geninfo_unexecuted_blocks=1 00:06:48.017 00:06:48.017 ' 00:06:48.017 06:40:40 app_cmdline -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:48.017 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:48.017 --rc genhtml_branch_coverage=1 00:06:48.017 --rc genhtml_function_coverage=1 00:06:48.017 --rc genhtml_legend=1 00:06:48.017 --rc geninfo_all_blocks=1 00:06:48.017 --rc geninfo_unexecuted_blocks=1 00:06:48.017 00:06:48.017 ' 00:06:48.017 06:40:40 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:06:48.017 06:40:40 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=71599 00:06:48.017 06:40:40 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 71599 00:06:48.017 06:40:40 app_cmdline -- common/autotest_common.sh@835 -- # '[' -z 71599 ']' 00:06:48.017 06:40:40 app_cmdline -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:48.017 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:48.017 06:40:40 app_cmdline -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:48.017 06:40:40 app_cmdline -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:48.017 06:40:40 app_cmdline -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:48.017 06:40:40 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:48.017 06:40:40 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:06:48.017 [2024-11-18 06:40:41.049343] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 
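Note the --rpcs-allowed spdk_get_version,rpc_get_methods flag on the spdk_tgt invocation above: it is an allow-list, and cmdline.sh probes both sides of it below. The two permitted methods answer normally, while env_dpdk_get_mem_stats (a real RPC, just not allow-listed here) is rejected with JSON-RPC error -32601, "Method not found". The probe, condensed:

    # Exercise the RPC allow-list; every call here appears in this log.
    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $RPC spdk_get_version                        # allowed -> version JSON
    $RPC rpc_get_methods | jq -r '.[]' | sort    # allowed -> exactly the two listed methods
    $RPC env_dpdk_get_mem_stats                  # rejected -> -32601 "Method not found"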
00:06:48.017 [2024-11-18 06:40:41.049501] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71599 ] 00:06:48.278 [2024-11-18 06:40:41.210148] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:48.278 [2024-11-18 06:40:41.238509] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:48.852 06:40:41 app_cmdline -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:48.852 06:40:41 app_cmdline -- common/autotest_common.sh@868 -- # return 0 00:06:48.852 06:40:41 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:06:49.113 { 00:06:49.113 "version": "SPDK v25.01-pre git sha1 83e8405e4", 00:06:49.113 "fields": { 00:06:49.113 "major": 25, 00:06:49.113 "minor": 1, 00:06:49.114 "patch": 0, 00:06:49.114 "suffix": "-pre", 00:06:49.114 "commit": "83e8405e4" 00:06:49.114 } 00:06:49.114 } 00:06:49.114 06:40:42 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:06:49.114 06:40:42 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:06:49.114 06:40:42 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:06:49.114 06:40:42 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:06:49.114 06:40:42 app_cmdline -- app/cmdline.sh@26 -- # sort 00:06:49.114 06:40:42 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:06:49.114 06:40:42 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:06:49.114 06:40:42 app_cmdline -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:49.114 06:40:42 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:49.114 06:40:42 app_cmdline -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:49.114 06:40:42 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:06:49.114 06:40:42 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:06:49.114 06:40:42 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:49.114 06:40:42 app_cmdline -- common/autotest_common.sh@652 -- # local es=0 00:06:49.114 06:40:42 app_cmdline -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:49.114 06:40:42 app_cmdline -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:49.114 06:40:42 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:49.114 06:40:42 app_cmdline -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:49.114 06:40:42 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:49.114 06:40:42 app_cmdline -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:49.114 06:40:42 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:49.114 06:40:42 app_cmdline -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:49.114 06:40:42 app_cmdline -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:06:49.114 06:40:42 app_cmdline -- common/autotest_common.sh@655 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:49.374 request: 00:06:49.375 { 00:06:49.375 "method": "env_dpdk_get_mem_stats", 00:06:49.375 "req_id": 1 00:06:49.375 } 00:06:49.375 Got JSON-RPC error response 00:06:49.375 response: 00:06:49.375 { 00:06:49.375 "code": -32601, 00:06:49.375 "message": "Method not found" 00:06:49.375 } 00:06:49.375 06:40:42 app_cmdline -- common/autotest_common.sh@655 -- # es=1 00:06:49.375 06:40:42 app_cmdline -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:49.375 06:40:42 app_cmdline -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:49.375 06:40:42 app_cmdline -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:49.375 06:40:42 app_cmdline -- app/cmdline.sh@1 -- # killprocess 71599 00:06:49.375 06:40:42 app_cmdline -- common/autotest_common.sh@954 -- # '[' -z 71599 ']' 00:06:49.375 06:40:42 app_cmdline -- common/autotest_common.sh@958 -- # kill -0 71599 00:06:49.375 06:40:42 app_cmdline -- common/autotest_common.sh@959 -- # uname 00:06:49.375 06:40:42 app_cmdline -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:49.375 06:40:42 app_cmdline -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71599 00:06:49.375 06:40:42 app_cmdline -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:49.375 06:40:42 app_cmdline -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:49.375 06:40:42 app_cmdline -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71599' 00:06:49.375 killing process with pid 71599 00:06:49.375 06:40:42 app_cmdline -- common/autotest_common.sh@973 -- # kill 71599 00:06:49.375 06:40:42 app_cmdline -- common/autotest_common.sh@978 -- # wait 71599 00:06:49.635 00:06:49.636 real 0m1.894s 00:06:49.636 user 0m2.174s 00:06:49.636 sys 0m0.505s 00:06:49.636 06:40:42 app_cmdline -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:49.636 ************************************ 00:06:49.636 END TEST app_cmdline 00:06:49.636 06:40:42 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:49.636 ************************************ 00:06:49.896 06:40:42 -- spdk/autotest.sh@177 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:06:49.896 06:40:42 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:49.897 06:40:42 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:49.897 06:40:42 -- common/autotest_common.sh@10 -- # set +x 00:06:49.897 ************************************ 00:06:49.897 START TEST version 00:06:49.897 ************************************ 00:06:49.897 06:40:42 version -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:06:49.897 * Looking for test storage... 
00:06:49.897 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:06:49.897 06:40:42 version -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:49.897 06:40:42 version -- common/autotest_common.sh@1693 -- # lcov --version 00:06:49.897 06:40:42 version -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:49.897 06:40:42 version -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:49.897 06:40:42 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:49.897 06:40:42 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:49.897 06:40:42 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:49.897 06:40:42 version -- scripts/common.sh@336 -- # IFS=.-: 00:06:49.897 06:40:42 version -- scripts/common.sh@336 -- # read -ra ver1 00:06:49.897 06:40:42 version -- scripts/common.sh@337 -- # IFS=.-: 00:06:49.897 06:40:42 version -- scripts/common.sh@337 -- # read -ra ver2 00:06:49.897 06:40:42 version -- scripts/common.sh@338 -- # local 'op=<' 00:06:49.897 06:40:42 version -- scripts/common.sh@340 -- # ver1_l=2 00:06:49.897 06:40:42 version -- scripts/common.sh@341 -- # ver2_l=1 00:06:49.897 06:40:42 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:49.897 06:40:42 version -- scripts/common.sh@344 -- # case "$op" in 00:06:49.897 06:40:42 version -- scripts/common.sh@345 -- # : 1 00:06:49.897 06:40:42 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:49.897 06:40:42 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:49.897 06:40:42 version -- scripts/common.sh@365 -- # decimal 1 00:06:49.897 06:40:42 version -- scripts/common.sh@353 -- # local d=1 00:06:49.897 06:40:42 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:49.897 06:40:42 version -- scripts/common.sh@355 -- # echo 1 00:06:49.897 06:40:42 version -- scripts/common.sh@365 -- # ver1[v]=1 00:06:49.897 06:40:42 version -- scripts/common.sh@366 -- # decimal 2 00:06:49.897 06:40:42 version -- scripts/common.sh@353 -- # local d=2 00:06:49.897 06:40:42 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:49.897 06:40:42 version -- scripts/common.sh@355 -- # echo 2 00:06:49.897 06:40:42 version -- scripts/common.sh@366 -- # ver2[v]=2 00:06:49.897 06:40:42 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:49.897 06:40:42 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:49.897 06:40:42 version -- scripts/common.sh@368 -- # return 0 00:06:49.897 06:40:42 version -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:49.897 06:40:42 version -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:49.897 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:49.897 --rc genhtml_branch_coverage=1 00:06:49.897 --rc genhtml_function_coverage=1 00:06:49.897 --rc genhtml_legend=1 00:06:49.897 --rc geninfo_all_blocks=1 00:06:49.897 --rc geninfo_unexecuted_blocks=1 00:06:49.897 00:06:49.897 ' 00:06:49.897 06:40:42 version -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:49.897 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:49.897 --rc genhtml_branch_coverage=1 00:06:49.897 --rc genhtml_function_coverage=1 00:06:49.897 --rc genhtml_legend=1 00:06:49.897 --rc geninfo_all_blocks=1 00:06:49.897 --rc geninfo_unexecuted_blocks=1 00:06:49.897 00:06:49.897 ' 00:06:49.897 06:40:42 version -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:49.897 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:06:49.897 --rc genhtml_branch_coverage=1 00:06:49.897 --rc genhtml_function_coverage=1 00:06:49.897 --rc genhtml_legend=1 00:06:49.897 --rc geninfo_all_blocks=1 00:06:49.897 --rc geninfo_unexecuted_blocks=1 00:06:49.897 00:06:49.897 ' 00:06:49.897 06:40:42 version -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:49.897 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:49.897 --rc genhtml_branch_coverage=1 00:06:49.897 --rc genhtml_function_coverage=1 00:06:49.897 --rc genhtml_legend=1 00:06:49.897 --rc geninfo_all_blocks=1 00:06:49.897 --rc geninfo_unexecuted_blocks=1 00:06:49.897 00:06:49.897 ' 00:06:49.897 06:40:42 version -- app/version.sh@17 -- # get_header_version major 00:06:49.897 06:40:42 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:49.897 06:40:42 version -- app/version.sh@14 -- # tr -d '"' 00:06:49.897 06:40:42 version -- app/version.sh@14 -- # cut -f2 00:06:49.897 06:40:42 version -- app/version.sh@17 -- # major=25 00:06:49.897 06:40:42 version -- app/version.sh@18 -- # get_header_version minor 00:06:49.897 06:40:42 version -- app/version.sh@14 -- # cut -f2 00:06:49.897 06:40:42 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:49.897 06:40:42 version -- app/version.sh@14 -- # tr -d '"' 00:06:49.897 06:40:42 version -- app/version.sh@18 -- # minor=1 00:06:49.897 06:40:42 version -- app/version.sh@19 -- # get_header_version patch 00:06:49.897 06:40:42 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:49.897 06:40:42 version -- app/version.sh@14 -- # cut -f2 00:06:49.897 06:40:42 version -- app/version.sh@14 -- # tr -d '"' 00:06:49.897 06:40:42 version -- app/version.sh@19 -- # patch=0 00:06:49.897 06:40:42 version -- app/version.sh@20 -- # get_header_version suffix 00:06:49.897 06:40:42 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:49.897 06:40:42 version -- app/version.sh@14 -- # cut -f2 00:06:49.897 06:40:42 version -- app/version.sh@14 -- # tr -d '"' 00:06:49.897 06:40:42 version -- app/version.sh@20 -- # suffix=-pre 00:06:49.897 06:40:42 version -- app/version.sh@22 -- # version=25.1 00:06:49.897 06:40:42 version -- app/version.sh@25 -- # (( patch != 0 )) 00:06:49.897 06:40:42 version -- app/version.sh@28 -- # version=25.1rc0 00:06:49.897 06:40:42 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:06:49.897 06:40:42 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:06:49.897 06:40:42 version -- app/version.sh@30 -- # py_version=25.1rc0 00:06:49.897 06:40:42 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:06:49.897 00:06:49.897 real 0m0.204s 00:06:49.897 user 0m0.134s 00:06:49.897 sys 0m0.099s 00:06:49.897 ************************************ 00:06:49.897 END TEST version 00:06:49.897 ************************************ 00:06:49.897 06:40:42 version -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:49.897 06:40:42 version -- common/autotest_common.sh@10 -- # set +x 00:06:50.158 06:40:43 -- 
spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:06:50.158 06:40:43 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:06:50.158 06:40:43 -- spdk/autotest.sh@194 -- # uname -s 00:06:50.158 06:40:43 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:06:50.158 06:40:43 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:06:50.158 06:40:43 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:06:50.158 06:40:43 -- spdk/autotest.sh@207 -- # '[' 1 -eq 1 ']' 00:06:50.158 06:40:43 -- spdk/autotest.sh@208 -- # run_test blockdev_nvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:06:50.158 06:40:43 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:50.158 06:40:43 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:50.158 06:40:43 -- common/autotest_common.sh@10 -- # set +x 00:06:50.158 ************************************ 00:06:50.158 START TEST blockdev_nvme 00:06:50.158 ************************************ 00:06:50.158 06:40:43 blockdev_nvme -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:06:50.158 * Looking for test storage... 00:06:50.158 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:06:50.158 06:40:43 blockdev_nvme -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:50.158 06:40:43 blockdev_nvme -- common/autotest_common.sh@1693 -- # lcov --version 00:06:50.158 06:40:43 blockdev_nvme -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:50.158 06:40:43 blockdev_nvme -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:50.158 06:40:43 blockdev_nvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:50.158 06:40:43 blockdev_nvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:50.158 06:40:43 blockdev_nvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:50.158 06:40:43 blockdev_nvme -- scripts/common.sh@336 -- # IFS=.-: 00:06:50.158 06:40:43 blockdev_nvme -- scripts/common.sh@336 -- # read -ra ver1 00:06:50.158 06:40:43 blockdev_nvme -- scripts/common.sh@337 -- # IFS=.-: 00:06:50.158 06:40:43 blockdev_nvme -- scripts/common.sh@337 -- # read -ra ver2 00:06:50.158 06:40:43 blockdev_nvme -- scripts/common.sh@338 -- # local 'op=<' 00:06:50.158 06:40:43 blockdev_nvme -- scripts/common.sh@340 -- # ver1_l=2 00:06:50.158 06:40:43 blockdev_nvme -- scripts/common.sh@341 -- # ver2_l=1 00:06:50.158 06:40:43 blockdev_nvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:50.158 06:40:43 blockdev_nvme -- scripts/common.sh@344 -- # case "$op" in 00:06:50.158 06:40:43 blockdev_nvme -- scripts/common.sh@345 -- # : 1 00:06:50.158 06:40:43 blockdev_nvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:50.158 06:40:43 blockdev_nvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:50.158 06:40:43 blockdev_nvme -- scripts/common.sh@365 -- # decimal 1 00:06:50.158 06:40:43 blockdev_nvme -- scripts/common.sh@353 -- # local d=1 00:06:50.158 06:40:43 blockdev_nvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:50.158 06:40:43 blockdev_nvme -- scripts/common.sh@355 -- # echo 1 00:06:50.158 06:40:43 blockdev_nvme -- scripts/common.sh@365 -- # ver1[v]=1 00:06:50.158 06:40:43 blockdev_nvme -- scripts/common.sh@366 -- # decimal 2 00:06:50.158 06:40:43 blockdev_nvme -- scripts/common.sh@353 -- # local d=2 00:06:50.158 06:40:43 blockdev_nvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:50.158 06:40:43 blockdev_nvme -- scripts/common.sh@355 -- # echo 2 00:06:50.158 06:40:43 blockdev_nvme -- scripts/common.sh@366 -- # ver2[v]=2 00:06:50.158 06:40:43 blockdev_nvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:50.158 06:40:43 blockdev_nvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:50.158 06:40:43 blockdev_nvme -- scripts/common.sh@368 -- # return 0 00:06:50.158 06:40:43 blockdev_nvme -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:50.158 06:40:43 blockdev_nvme -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:50.158 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:50.158 --rc genhtml_branch_coverage=1 00:06:50.158 --rc genhtml_function_coverage=1 00:06:50.158 --rc genhtml_legend=1 00:06:50.158 --rc geninfo_all_blocks=1 00:06:50.158 --rc geninfo_unexecuted_blocks=1 00:06:50.158 00:06:50.158 ' 00:06:50.158 06:40:43 blockdev_nvme -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:50.158 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:50.158 --rc genhtml_branch_coverage=1 00:06:50.158 --rc genhtml_function_coverage=1 00:06:50.158 --rc genhtml_legend=1 00:06:50.158 --rc geninfo_all_blocks=1 00:06:50.158 --rc geninfo_unexecuted_blocks=1 00:06:50.158 00:06:50.158 ' 00:06:50.158 06:40:43 blockdev_nvme -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:50.158 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:50.158 --rc genhtml_branch_coverage=1 00:06:50.158 --rc genhtml_function_coverage=1 00:06:50.158 --rc genhtml_legend=1 00:06:50.158 --rc geninfo_all_blocks=1 00:06:50.158 --rc geninfo_unexecuted_blocks=1 00:06:50.158 00:06:50.158 ' 00:06:50.158 06:40:43 blockdev_nvme -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:50.158 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:50.158 --rc genhtml_branch_coverage=1 00:06:50.158 --rc genhtml_function_coverage=1 00:06:50.158 --rc genhtml_legend=1 00:06:50.158 --rc geninfo_all_blocks=1 00:06:50.158 --rc geninfo_unexecuted_blocks=1 00:06:50.158 00:06:50.158 ' 00:06:50.158 06:40:43 blockdev_nvme -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:06:50.158 06:40:43 blockdev_nvme -- bdev/nbd_common.sh@6 -- # set -e 00:06:50.158 06:40:43 blockdev_nvme -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:06:50.158 06:40:43 blockdev_nvme -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:06:50.158 06:40:43 blockdev_nvme -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:06:50.158 06:40:43 blockdev_nvme -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:06:50.158 06:40:43 blockdev_nvme -- bdev/blockdev.sh@17 -- # export 
RPC_PIPE_TIMEOUT=30 00:06:50.158 06:40:43 blockdev_nvme -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:06:50.158 06:40:43 blockdev_nvme -- bdev/blockdev.sh@20 -- # : 00:06:50.158 06:40:43 blockdev_nvme -- bdev/blockdev.sh@669 -- # QOS_DEV_1=Malloc_0 00:06:50.158 06:40:43 blockdev_nvme -- bdev/blockdev.sh@670 -- # QOS_DEV_2=Null_1 00:06:50.158 06:40:43 blockdev_nvme -- bdev/blockdev.sh@671 -- # QOS_RUN_TIME=5 00:06:50.158 06:40:43 blockdev_nvme -- bdev/blockdev.sh@673 -- # uname -s 00:06:50.159 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:50.159 06:40:43 blockdev_nvme -- bdev/blockdev.sh@673 -- # '[' Linux = Linux ']' 00:06:50.159 06:40:43 blockdev_nvme -- bdev/blockdev.sh@675 -- # PRE_RESERVED_MEM=0 00:06:50.159 06:40:43 blockdev_nvme -- bdev/blockdev.sh@681 -- # test_type=nvme 00:06:50.159 06:40:43 blockdev_nvme -- bdev/blockdev.sh@682 -- # crypto_device= 00:06:50.159 06:40:43 blockdev_nvme -- bdev/blockdev.sh@683 -- # dek= 00:06:50.159 06:40:43 blockdev_nvme -- bdev/blockdev.sh@684 -- # env_ctx= 00:06:50.159 06:40:43 blockdev_nvme -- bdev/blockdev.sh@685 -- # wait_for_rpc= 00:06:50.159 06:40:43 blockdev_nvme -- bdev/blockdev.sh@686 -- # '[' -n '' ']' 00:06:50.159 06:40:43 blockdev_nvme -- bdev/blockdev.sh@689 -- # [[ nvme == bdev ]] 00:06:50.159 06:40:43 blockdev_nvme -- bdev/blockdev.sh@689 -- # [[ nvme == crypto_* ]] 00:06:50.159 06:40:43 blockdev_nvme -- bdev/blockdev.sh@692 -- # start_spdk_tgt 00:06:50.159 06:40:43 blockdev_nvme -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=71766 00:06:50.159 06:40:43 blockdev_nvme -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:06:50.159 06:40:43 blockdev_nvme -- bdev/blockdev.sh@49 -- # waitforlisten 71766 00:06:50.159 06:40:43 blockdev_nvme -- common/autotest_common.sh@835 -- # '[' -z 71766 ']' 00:06:50.159 06:40:43 blockdev_nvme -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:50.159 06:40:43 blockdev_nvme -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:50.159 06:40:43 blockdev_nvme -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:50.159 06:40:43 blockdev_nvme -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:50.159 06:40:43 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:50.159 06:40:43 blockdev_nvme -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:06:50.419 [2024-11-18 06:40:43.289043] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 
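(Annotation: the xtrace above steps through scripts/common.sh's version comparison — `lt 1.15 2` splits each version string on `.-:` and compares component by component to decide whether the installed lcov predates 2.x before exporting LCOV_OPTS. A condensed, standalone sketch of that ordering logic follows; it is a simplification, and the real helper additionally routes through cmp_versions and validates each component with the `decimal` regex checks visible in the trace.)

    lt() {
        local -a ver1 ver2
        local v
        IFS=.-: read -ra ver1 <<< "$1"
        IFS=.-: read -ra ver2 <<< "$2"
        # Compare component by component; a missing component counts as 0.
        for (( v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++ )); do
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
        done
        return 1   # equal versions are not strictly less-than
    }

    # As in the trace: take the last field of `lcov --version` and test it against 2.
    # (The echo is illustrative; the real script sets the LCOV_OPTS/LCOV exports above.)
    lt "$(lcov --version | awk '{print $NF}')" 2 && echo 'lcov is pre-2.x'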
00:06:50.419 [2024-11-18 06:40:43.289182] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71766 ] 00:06:50.419 [2024-11-18 06:40:43.450735] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:50.419 [2024-11-18 06:40:43.480104] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:51.361 06:40:44 blockdev_nvme -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:51.361 06:40:44 blockdev_nvme -- common/autotest_common.sh@868 -- # return 0 00:06:51.361 06:40:44 blockdev_nvme -- bdev/blockdev.sh@693 -- # case "$test_type" in 00:06:51.361 06:40:44 blockdev_nvme -- bdev/blockdev.sh@698 -- # setup_nvme_conf 00:06:51.361 06:40:44 blockdev_nvme -- bdev/blockdev.sh@81 -- # local json 00:06:51.361 06:40:44 blockdev_nvme -- bdev/blockdev.sh@82 -- # mapfile -t json 00:06:51.361 06:40:44 blockdev_nvme -- bdev/blockdev.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:06:51.361 06:40:44 blockdev_nvme -- bdev/blockdev.sh@83 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:10.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme1", "traddr":"0000:00:11.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme2", "traddr":"0000:00:12.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme3", "traddr":"0000:00:13.0" } } ] }'\''' 00:06:51.361 06:40:44 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:51.361 06:40:44 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:51.640 06:40:44 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:51.640 06:40:44 blockdev_nvme -- bdev/blockdev.sh@736 -- # rpc_cmd bdev_wait_for_examine 00:06:51.640 06:40:44 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:51.640 06:40:44 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:51.640 06:40:44 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:51.640 06:40:44 blockdev_nvme -- bdev/blockdev.sh@739 -- # cat 00:06:51.640 06:40:44 blockdev_nvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n accel 00:06:51.640 06:40:44 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:51.640 06:40:44 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:51.640 06:40:44 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:51.640 06:40:44 blockdev_nvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n bdev 00:06:51.640 06:40:44 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:51.640 06:40:44 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:51.640 06:40:44 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:51.640 06:40:44 blockdev_nvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n iobuf 00:06:51.641 06:40:44 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:51.641 06:40:44 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:51.641 06:40:44 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:51.641 06:40:44 blockdev_nvme -- 
bdev/blockdev.sh@747 -- # mapfile -t bdevs 00:06:51.641 06:40:44 blockdev_nvme -- bdev/blockdev.sh@747 -- # rpc_cmd bdev_get_bdevs 00:06:51.641 06:40:44 blockdev_nvme -- bdev/blockdev.sh@747 -- # jq -r '.[] | select(.claimed == false)' 00:06:51.641 06:40:44 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:51.641 06:40:44 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:51.641 06:40:44 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:51.641 06:40:44 blockdev_nvme -- bdev/blockdev.sh@748 -- # mapfile -t bdevs_name 00:06:51.641 06:40:44 blockdev_nvme -- bdev/blockdev.sh@748 -- # jq -r .name 00:06:51.641 06:40:44 blockdev_nvme -- bdev/blockdev.sh@748 -- # printf '%s\n' '{' ' "name": "Nvme0n1",' ' "aliases": [' ' "ea62888f-4cde-46c4-9feb-b7759f19950f"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "ea62888f-4cde-46c4-9feb-b7759f19950f",' ' "numa_id": -1,' ' "md_size": 64,' ' "md_interleave": false,' ' "dif_type": 0,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": true,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:10.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:10.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12340",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12340",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme1n1",' ' "aliases": [' ' "3271ddea-29a8-4926-95e9-93da347a5d82"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "3271ddea-29a8-4926-95e9-93da347a5d82",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:11.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:11.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12341",' ' "firmware_revision": "8.0.0",' ' "subnqn": 
"nqn.2019-08.org.qemu:12341",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n1",' ' "aliases": [' ' "efed2a7f-4574-4f98-8011-2f8c560d5fc7"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "efed2a7f-4574-4f98-8011-2f8c560d5fc7",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n2",' ' "aliases": [' ' "2e0ab152-bfcd-486c-9019-cb86aa47f730"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "2e0ab152-bfcd-486c-9019-cb86aa47f730",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 2,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n3",' ' "aliases": [' ' "6ea2e444-28fa-48fb-ab69-a61d382f420d"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 
1048576,' ' "uuid": "6ea2e444-28fa-48fb-ab69-a61d382f420d",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 3,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme3n1",' ' "aliases": [' ' "5152e1b9-a213-4539-9985-005f6eb7decd"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "5152e1b9-a213-4539-9985-005f6eb7decd",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:13.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:13.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12343",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:fdp-subsys3",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": true,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": true' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:06:51.641 06:40:44 blockdev_nvme -- bdev/blockdev.sh@749 -- # bdev_list=("${bdevs_name[@]}") 00:06:51.641 06:40:44 blockdev_nvme -- bdev/blockdev.sh@751 -- # hello_world_bdev=Nvme0n1 00:06:51.641 06:40:44 blockdev_nvme -- bdev/blockdev.sh@752 -- # trap - SIGINT SIGTERM EXIT 00:06:51.641 06:40:44 blockdev_nvme -- bdev/blockdev.sh@753 -- # killprocess 71766 00:06:51.641 06:40:44 blockdev_nvme -- common/autotest_common.sh@954 -- # '[' -z 71766 ']' 00:06:51.641 06:40:44 blockdev_nvme -- common/autotest_common.sh@958 -- # kill -0 71766 00:06:51.641 06:40:44 blockdev_nvme -- common/autotest_common.sh@959 -- # uname 00:06:51.641 06:40:44 
blockdev_nvme -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:51.642 06:40:44 blockdev_nvme -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71766 00:06:51.642 killing process with pid 71766 00:06:51.642 06:40:44 blockdev_nvme -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:51.642 06:40:44 blockdev_nvme -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:51.642 06:40:44 blockdev_nvme -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71766' 00:06:51.642 06:40:44 blockdev_nvme -- common/autotest_common.sh@973 -- # kill 71766 00:06:51.642 06:40:44 blockdev_nvme -- common/autotest_common.sh@978 -- # wait 71766 00:06:51.902 06:40:44 blockdev_nvme -- bdev/blockdev.sh@757 -- # trap cleanup SIGINT SIGTERM EXIT 00:06:51.902 06:40:44 blockdev_nvme -- bdev/blockdev.sh@759 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:06:51.902 06:40:44 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:06:51.902 06:40:44 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:51.902 06:40:44 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:51.902 ************************************ 00:06:51.902 START TEST bdev_hello_world 00:06:51.902 ************************************ 00:06:51.902 06:40:44 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:06:52.163 [2024-11-18 06:40:45.017402] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 00:06:52.163 [2024-11-18 06:40:45.017540] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71833 ] 00:06:52.163 [2024-11-18 06:40:45.175401] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:52.163 [2024-11-18 06:40:45.202639] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:52.734 [2024-11-18 06:40:45.603265] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:06:52.734 [2024-11-18 06:40:45.603325] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1 00:06:52.734 [2024-11-18 06:40:45.603347] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:06:52.734 [2024-11-18 06:40:45.605676] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:06:52.734 [2024-11-18 06:40:45.606494] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:06:52.734 [2024-11-18 06:40:45.606530] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:06:52.734 [2024-11-18 06:40:45.607139] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
00:06:52.734 00:06:52.734 [2024-11-18 06:40:45.607172] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:06:52.734 00:06:52.734 real 0m0.842s 00:06:52.734 user 0m0.538s 00:06:52.734 sys 0m0.198s 00:06:52.734 06:40:45 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:52.734 06:40:45 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:06:52.734 ************************************ 00:06:52.734 END TEST bdev_hello_world 00:06:52.734 ************************************ 00:06:52.995 06:40:45 blockdev_nvme -- bdev/blockdev.sh@760 -- # run_test bdev_bounds bdev_bounds '' 00:06:52.996 06:40:45 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:52.996 06:40:45 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:52.996 06:40:45 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:52.996 ************************************ 00:06:52.996 START TEST bdev_bounds 00:06:52.996 ************************************ 00:06:52.996 06:40:45 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@1129 -- # bdev_bounds '' 00:06:52.996 06:40:45 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=71864 00:06:52.996 06:40:45 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:06:52.996 Process bdevio pid: 71864 00:06:52.996 06:40:45 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:06:52.996 06:40:45 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 71864' 00:06:52.996 06:40:45 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 71864 00:06:52.996 06:40:45 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@835 -- # '[' -z 71864 ']' 00:06:52.996 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:52.996 06:40:45 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:52.996 06:40:45 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:52.996 06:40:45 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:52.996 06:40:45 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:52.996 06:40:45 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:06:52.996 [2024-11-18 06:40:45.932161] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 
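(Annotation: both the hello_bdev run above and the bdevio run starting here consume the same bdev.json. The config string passed to load_subsystem_config earlier — generated by scripts/gen_nvme.sh — expands, reformatted for readability only, to the following. Note the controller at 0000:00:12.0 exposes three namespaces, which is why it surfaces as Nvme2n1 through Nvme2n3 in the enumeration.)

    {
      "subsystem": "bdev",
      "config": [
        { "method": "bdev_nvme_attach_controller",
          "params": { "trtype": "PCIe", "name": "Nvme0", "traddr": "0000:00:10.0" } },
        { "method": "bdev_nvme_attach_controller",
          "params": { "trtype": "PCIe", "name": "Nvme1", "traddr": "0000:00:11.0" } },
        { "method": "bdev_nvme_attach_controller",
          "params": { "trtype": "PCIe", "name": "Nvme2", "traddr": "0000:00:12.0" } },
        { "method": "bdev_nvme_attach_controller",
          "params": { "trtype": "PCIe", "name": "Nvme3", "traddr": "0000:00:13.0" } }
      ]
    }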
00:06:52.996 [2024-11-18 06:40:45.932473] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71864 ] 00:06:53.256 [2024-11-18 06:40:46.091278] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:53.256 [2024-11-18 06:40:46.122465] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:53.256 [2024-11-18 06:40:46.122831] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:53.256 [2024-11-18 06:40:46.122775] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:53.826 06:40:46 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:53.826 06:40:46 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@868 -- # return 0 00:06:53.826 06:40:46 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:06:53.826 I/O targets: 00:06:53.826 Nvme0n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:06:53.826 Nvme1n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:06:53.826 Nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:06:53.826 Nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:06:53.826 Nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:06:53.826 Nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:06:53.826 00:06:53.826 00:06:53.826 CUnit - A unit testing framework for C - Version 2.1-3 00:06:53.826 http://cunit.sourceforge.net/ 00:06:53.826 00:06:53.826 00:06:53.826 Suite: bdevio tests on: Nvme3n1 00:06:53.826 Test: blockdev write read block ...passed 00:06:53.826 Test: blockdev write zeroes read block ...passed 00:06:53.826 Test: blockdev write zeroes read no split ...passed 00:06:53.826 Test: blockdev write zeroes read split ...passed 00:06:53.826 Test: blockdev write zeroes read split partial ...passed 00:06:53.826 Test: blockdev reset ...[2024-11-18 06:40:46.850585] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:13.0, 0] resetting controller 00:06:53.826 passed 00:06:53.826 Test: blockdev write read 8 blocks ...[2024-11-18 06:40:46.853658] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:13.0, 0] Resetting controller successful. 
00:06:53.826 passed 00:06:53.826 Test: blockdev write read size > 128k ...passed 00:06:53.826 Test: blockdev write read invalid size ...passed 00:06:53.826 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:53.826 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:53.827 Test: blockdev write read max offset ...passed 00:06:53.827 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:53.827 Test: blockdev writev readv 8 blocks ...passed 00:06:53.827 Test: blockdev writev readv 30 x 1block ...passed 00:06:53.827 Test: blockdev writev readv block ...passed 00:06:53.827 Test: blockdev writev readv size > 128k ...passed 00:06:53.827 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:53.827 Test: blockdev comparev and writev ...[2024-11-18 06:40:46.869224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2d3206000 len:0x1000 00:06:53.827 [2024-11-18 06:40:46.869275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:53.827 passed 00:06:53.827 Test: blockdev nvme passthru rw ...passed 00:06:53.827 Test: blockdev nvme passthru vendor specific ...passed 00:06:53.827 Test: blockdev nvme admin passthru ...[2024-11-18 06:40:46.871570] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:06:53.827 [2024-11-18 06:40:46.871610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:06:53.827 passed 00:06:53.827 Test: blockdev copy ...passed 00:06:53.827 Suite: bdevio tests on: Nvme2n3 00:06:53.827 Test: blockdev write read block ...passed 00:06:53.827 Test: blockdev write zeroes read block ...passed 00:06:53.827 Test: blockdev write zeroes read no split ...passed 00:06:53.827 Test: blockdev write zeroes read split ...passed 00:06:53.827 Test: blockdev write zeroes read split partial ...passed 00:06:53.827 Test: blockdev reset ...[2024-11-18 06:40:46.901319] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:06:53.827 [2024-11-18 06:40:46.904413] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 00:06:53.827 passed 00:06:53.827 Test: blockdev write read 8 blocks ...passed 
00:06:53.827 Test: blockdev write read size > 128k ...passed 00:06:53.827 Test: blockdev write read invalid size ...passed 00:06:53.827 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:53.827 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:53.827 Test: blockdev write read max offset ...passed 00:06:54.089 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:54.089 Test: blockdev writev readv 8 blocks ...passed 00:06:54.089 Test: blockdev writev readv 30 x 1block ...passed 00:06:54.089 Test: blockdev writev readv block ...passed 00:06:54.089 Test: blockdev writev readv size > 128k ...passed 00:06:54.089 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:54.089 Test: blockdev comparev and writev ...[2024-11-18 06:40:46.919910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:3 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x308005000 len:0x1000 00:06:54.089 [2024-11-18 06:40:46.919962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:54.089 passed 00:06:54.089 Test: blockdev nvme passthru rw ...passed 00:06:54.089 Test: blockdev nvme passthru vendor specific ...passed 00:06:54.089 Test: blockdev nvme admin passthru ...[2024-11-18 06:40:46.921985] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:06:54.089 [2024-11-18 06:40:46.922019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:06:54.089 passed 00:06:54.089 Test: blockdev copy ...passed 00:06:54.089 Suite: bdevio tests on: Nvme2n2 00:06:54.089 Test: blockdev write read block ...passed 00:06:54.089 Test: blockdev write zeroes read block ...passed 00:06:54.089 Test: blockdev write zeroes read no split ...passed 00:06:54.089 Test: blockdev write zeroes read split ...passed 00:06:54.089 Test: blockdev write zeroes read split partial ...passed 00:06:54.089 Test: blockdev reset ...[2024-11-18 06:40:46.950338] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:06:54.089 [2024-11-18 06:40:46.955894] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 00:06:54.089 passed 00:06:54.089 Test: blockdev write read 8 blocks ...passed 
00:06:54.089 Test: blockdev write read size > 128k ...passed 00:06:54.090 Test: blockdev write read invalid size ...passed 00:06:54.090 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:54.090 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:54.090 Test: blockdev write read max offset ...passed 00:06:54.090 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:54.090 Test: blockdev writev readv 8 blocks ...passed 00:06:54.090 Test: blockdev writev readv 30 x 1block ...passed 00:06:54.090 Test: blockdev writev readv block ...passed 00:06:54.090 Test: blockdev writev readv size > 128k ...passed 00:06:54.090 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:54.090 Test: blockdev comparev and writev ...[2024-11-18 06:40:46.971192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:2 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2e9636000 len:0x1000 [2024-11-18 06:40:46.971338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:54.090 passed 00:06:54.090 Test: blockdev nvme passthru rw ...passed 00:06:54.090 Test: blockdev nvme passthru vendor specific ...passed 00:06:54.090 Test: blockdev nvme admin passthru ...[2024-11-18 06:40:46.973319] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:06:54.090 [2024-11-18 06:40:46.973360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:06:54.090 passed 00:06:54.090 Test: blockdev copy ...passed 00:06:54.090 Suite: bdevio tests on: Nvme2n1 00:06:54.090 Test: blockdev write read block ...passed 00:06:54.090 Test: blockdev write zeroes read block ...passed 00:06:54.090 Test: blockdev write zeroes read no split ...passed 00:06:54.090 Test: blockdev write zeroes read split ...passed 00:06:54.090 Test: blockdev write zeroes read split partial ...passed 00:06:54.090 Test: blockdev reset ...[2024-11-18 06:40:46.999069] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:06:54.090 passed 00:06:54.090 Test: blockdev write read 8 blocks ...[2024-11-18 06:40:47.002214] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
00:06:54.090 passed 00:06:54.090 Test: blockdev write read size > 128k ...passed 00:06:54.090 Test: blockdev write read invalid size ...passed 00:06:54.090 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:54.090 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:54.090 Test: blockdev write read max offset ...passed 00:06:54.090 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:54.090 Test: blockdev writev readv 8 blocks ...passed 00:06:54.090 Test: blockdev writev readv 30 x 1block ...passed 00:06:54.090 Test: blockdev writev readv block ...passed 00:06:54.090 Test: blockdev writev readv size > 128k ...passed 00:06:54.090 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:54.090 Test: blockdev comparev and writev ...[2024-11-18 06:40:47.015486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2e9630000 len:0x1000 00:06:54.090 [2024-11-18 06:40:47.015530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:54.090 passed 00:06:54.090 Test: blockdev nvme passthru rw ...passed 00:06:54.090 Test: blockdev nvme passthru vendor specific ...passed 00:06:54.090 Test: blockdev nvme admin passthru ...[2024-11-18 06:40:47.017442] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:06:54.090 [2024-11-18 06:40:47.017478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:06:54.090 passed 00:06:54.090 Test: blockdev copy ...passed 00:06:54.090 Suite: bdevio tests on: Nvme1n1 00:06:54.090 Test: blockdev write read block ...passed 00:06:54.090 Test: blockdev write zeroes read block ...passed 00:06:54.090 Test: blockdev write zeroes read no split ...passed 00:06:54.090 Test: blockdev write zeroes read split ...passed 00:06:54.090 Test: blockdev write zeroes read split partial ...passed 00:06:54.090 Test: blockdev reset ...[2024-11-18 06:40:47.038650] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0, 0] resetting controller 00:06:54.090 passed 00:06:54.090 Test: blockdev write read 8 blocks ...[2024-11-18 06:40:47.041624] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:11.0, 0] Resetting controller successful. 
00:06:54.090 passed 00:06:54.090 Test: blockdev write read size > 128k ...passed 00:06:54.090 Test: blockdev write read invalid size ...passed 00:06:54.090 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:54.090 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:54.090 Test: blockdev write read max offset ...passed 00:06:54.090 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:54.090 Test: blockdev writev readv 8 blocks ...passed 00:06:54.090 Test: blockdev writev readv 30 x 1block ...passed 00:06:54.090 Test: blockdev writev readv block ...passed 00:06:54.090 Test: blockdev writev readv size > 128k ...passed 00:06:54.090 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:54.090 Test: blockdev comparev and writev ...[2024-11-18 06:40:47.056840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2e962c000 len:0x1000 00:06:54.090 [2024-11-18 06:40:47.056883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:54.090 passed 00:06:54.090 Test: blockdev nvme passthru rw ...passed 00:06:54.090 Test: blockdev nvme passthru vendor specific ...passed 00:06:54.090 Test: blockdev nvme admin passthru ...[2024-11-18 06:40:47.059020] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:06:54.090 [2024-11-18 06:40:47.059057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:06:54.090 passed 00:06:54.090 Test: blockdev copy ...passed 00:06:54.090 Suite: bdevio tests on: Nvme0n1 00:06:54.090 Test: blockdev write read block ...passed 00:06:54.090 Test: blockdev write zeroes read block ...passed 00:06:54.090 Test: blockdev write zeroes read no split ...passed 00:06:54.090 Test: blockdev write zeroes read split ...passed 00:06:54.090 Test: blockdev write zeroes read split partial ...passed 00:06:54.090 Test: blockdev reset ...[2024-11-18 06:40:47.091240] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0, 0] resetting controller 00:06:54.090 [2024-11-18 06:40:47.094106] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:10.0, 0] Resetting controller successful. 00:06:54.090 passed 00:06:54.090 Test: blockdev write read 8 blocks ...passed 00:06:54.090 Test: blockdev write read size > 128k ...passed 00:06:54.090 Test: blockdev write read invalid size ...passed 00:06:54.090 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:54.090 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:54.090 Test: blockdev write read max offset ...passed 00:06:54.090 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:54.090 Test: blockdev writev readv 8 blocks ...passed 00:06:54.090 Test: blockdev writev readv 30 x 1block ...passed 00:06:54.090 Test: blockdev writev readv block ...passed 00:06:54.090 Test: blockdev writev readv size > 128k ...passed 00:06:54.090 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:54.090 Test: blockdev comparev and writev ...passed 00:06:54.090 Test: blockdev nvme passthru rw ...[2024-11-18 06:40:47.107671] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1 since it has 00:06:54.090 separate metadata which is not supported yet. 
00:06:54.090 passed 00:06:54.090 Test: blockdev nvme passthru vendor specific ...passed 00:06:54.090 Test: blockdev nvme admin passthru ...[2024-11-18 06:40:47.109282] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:191 PRP1 0x0 PRP2 0x0 00:06:54.090 [2024-11-18 06:40:47.109328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:191 cdw0:0 sqhd:0017 p:1 m:0 dnr:1 00:06:54.090 passed 00:06:54.090 Test: blockdev copy ...passed 00:06:54.090 00:06:54.090 Run Summary: Type Total Ran Passed Failed Inactive 00:06:54.090 suites 6 6 n/a 0 0 00:06:54.090 tests 138 138 138 0 0 00:06:54.090 asserts 893 893 893 0 n/a 00:06:54.090 00:06:54.090 Elapsed time = 0.625 seconds 00:06:54.090 0 00:06:54.090 06:40:47 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 71864 00:06:54.090 06:40:47 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@954 -- # '[' -z 71864 ']' 00:06:54.090 06:40:47 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@958 -- # kill -0 71864 00:06:54.090 06:40:47 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@959 -- # uname 00:06:54.090 06:40:47 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:54.090 06:40:47 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71864 00:06:54.090 06:40:47 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:54.090 06:40:47 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:54.090 06:40:47 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71864' 00:06:54.090 killing process with pid 71864 00:06:54.090 06:40:47 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@973 -- # kill 71864 00:06:54.090 06:40:47 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@978 -- # wait 71864 00:06:54.351 06:40:47 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:06:54.351 00:06:54.351 real 0m1.475s 00:06:54.351 user 0m3.588s 00:06:54.351 sys 0m0.313s 00:06:54.351 06:40:47 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:54.351 06:40:47 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:06:54.351 ************************************ 00:06:54.351 END TEST bdev_bounds 00:06:54.351 ************************************ 00:06:54.351 06:40:47 blockdev_nvme -- bdev/blockdev.sh@761 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:06:54.351 06:40:47 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:06:54.351 06:40:47 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:54.351 06:40:47 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:54.351 ************************************ 00:06:54.351 START TEST bdev_nbd 00:06:54.351 ************************************ 00:06:54.351 06:40:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@1129 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:06:54.351 06:40:47 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:06:54.351 06:40:47 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:06:54.351 06:40:47 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@301 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:06:54.351 06:40:47 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:06:54.351 06:40:47 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:54.351 06:40:47 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:06:54.351 06:40:47 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=6 00:06:54.351 06:40:47 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:06:54.351 06:40:47 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:06:54.351 06:40:47 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:06:54.351 06:40:47 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=6 00:06:54.351 06:40:47 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:06:54.351 06:40:47 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:06:54.351 06:40:47 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:54.351 06:40:47 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:06:54.351 06:40:47 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=71914 00:06:54.351 06:40:47 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:06:54.351 06:40:47 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 71914 /var/tmp/spdk-nbd.sock 00:06:54.351 06:40:47 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:06:54.351 06:40:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@835 -- # '[' -z 71914 ']' 00:06:54.351 06:40:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:54.351 06:40:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:54.351 06:40:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:54.351 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:54.351 06:40:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:54.351 06:40:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:06:54.612 [2024-11-18 06:40:47.482606] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 
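(Annotation: the nbd_rpc_start_stop_verify xtrace that follows repeats one start/verify/stop cycle per bdev. Condensed, and omitting the 20-attempt retry loops the waitfornbd/waitfornbd_exit helpers wrap around the grep checks, it amounts to the sketch below; the real helper actually starts all six exports via nbd_start_disks_without_nbd_idx, cross-checks them with nbd_get_disks, and only then stops them, so this per-device loop is a simplification.)

    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"
    for bdev in Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1; do
        # nbd_start_disk prints the kernel device it bound, e.g. /dev/nbd0
        nbd_device=$($rpc nbd_start_disk "$bdev")
        # waitfornbd: the node must appear in /proc/partitions and serve a
        # direct-I/O read of one 4k block (the "1+0 records in/out" lines below)
        grep -q -w "${nbd_device#/dev/}" /proc/partitions
        dd if="$nbd_device" of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
        $rpc nbd_stop_disk "$nbd_device"
    done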
00:06:54.612 [2024-11-18 06:40:47.482891] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:54.612 [2024-11-18 06:40:47.641910] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:54.612 [2024-11-18 06:40:47.663365] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:55.550 06:40:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:55.550 06:40:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # return 0 00:06:55.550 06:40:48 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:06:55.550 06:40:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:55.550 06:40:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:55.550 06:40:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:06:55.550 06:40:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:06:55.550 06:40:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:55.550 06:40:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:55.550 06:40:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:06:55.550 06:40:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:06:55.550 06:40:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:06:55.550 06:40:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:06:55.550 06:40:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:06:55.550 06:40:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 00:06:55.550 06:40:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:06:55.550 06:40:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:06:55.550 06:40:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:06:55.550 06:40:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:06:55.550 06:40:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:55.550 06:40:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:55.550 06:40:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:55.550 06:40:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:06:55.550 06:40:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:55.550 06:40:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:55.550 06:40:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:55.550 06:40:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:55.550 1+0 records in 
00:06:55.550 1+0 records out 00:06:55.550 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00104125 s, 3.9 MB/s 00:06:55.550 06:40:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:55.550 06:40:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:55.550 06:40:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:55.550 06:40:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:55.550 06:40:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:55.550 06:40:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:55.550 06:40:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:06:55.550 06:40:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 00:06:55.808 06:40:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:06:55.808 06:40:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:06:55.808 06:40:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:06:55.808 06:40:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:06:55.808 06:40:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:55.808 06:40:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:55.808 06:40:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:55.808 06:40:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:06:55.808 06:40:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:55.808 06:40:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:55.808 06:40:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:55.808 06:40:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:55.808 1+0 records in 00:06:55.808 1+0 records out 00:06:55.808 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000407796 s, 10.0 MB/s 00:06:55.808 06:40:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:55.808 06:40:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:55.808 06:40:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:55.808 06:40:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:55.808 06:40:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:55.808 06:40:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:55.808 06:40:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:06:55.808 06:40:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 00:06:56.067 06:40:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:06:56.067 06:40:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:06:56.067 06:40:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # 
waitfornbd nbd2 00:06:56.067 06:40:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd2 00:06:56.068 06:40:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:56.068 06:40:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:56.068 06:40:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:56.068 06:40:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd2 /proc/partitions 00:06:56.068 06:40:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:56.068 06:40:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:56.068 06:40:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:56.068 06:40:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:56.068 1+0 records in 00:06:56.068 1+0 records out 00:06:56.068 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00103636 s, 4.0 MB/s 00:06:56.068 06:40:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:56.068 06:40:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:56.068 06:40:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:56.068 06:40:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:56.068 06:40:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:56.068 06:40:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:56.068 06:40:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:06:56.068 06:40:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 00:06:56.327 06:40:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:06:56.327 06:40:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:06:56.327 06:40:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:06:56.327 06:40:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd3 00:06:56.327 06:40:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:56.327 06:40:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:56.327 06:40:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:56.327 06:40:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd3 /proc/partitions 00:06:56.327 06:40:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:56.327 06:40:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:56.327 06:40:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:56.327 06:40:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:56.327 1+0 records in 00:06:56.327 1+0 records out 00:06:56.327 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000347772 s, 11.8 MB/s 00:06:56.328 06:40:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:56.328 06:40:49 
blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:56.328 06:40:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:56.328 06:40:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:56.328 06:40:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:56.328 06:40:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:56.328 06:40:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:06:56.328 06:40:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 00:06:56.587 06:40:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:06:56.587 06:40:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:06:56.587 06:40:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:06:56.587 06:40:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd4 00:06:56.587 06:40:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:56.587 06:40:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:56.587 06:40:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:56.588 06:40:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd4 /proc/partitions 00:06:56.588 06:40:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:56.588 06:40:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:56.588 06:40:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:56.588 06:40:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:56.588 1+0 records in 00:06:56.588 1+0 records out 00:06:56.588 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000488638 s, 8.4 MB/s 00:06:56.588 06:40:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:56.588 06:40:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:56.588 06:40:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:56.588 06:40:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:56.588 06:40:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:56.588 06:40:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:56.588 06:40:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:06:56.588 06:40:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 00:06:56.847 06:40:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:06:56.848 06:40:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:06:56.848 06:40:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:06:56.848 06:40:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd5 00:06:56.848 06:40:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:56.848 06:40:49 blockdev_nvme.bdev_nbd -- 
common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:56.848 06:40:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:56.848 06:40:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd5 /proc/partitions 00:06:56.848 06:40:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:56.848 06:40:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:56.848 06:40:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:56.848 06:40:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:56.848 1+0 records in 00:06:56.848 1+0 records out 00:06:56.848 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00105932 s, 3.9 MB/s 00:06:56.848 06:40:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:56.848 06:40:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:56.848 06:40:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:56.848 06:40:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:56.848 06:40:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:56.848 06:40:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:56.848 06:40:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:06:56.848 06:40:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:56.848 06:40:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:06:56.848 { 00:06:56.848 "nbd_device": "/dev/nbd0", 00:06:56.848 "bdev_name": "Nvme0n1" 00:06:56.848 }, 00:06:56.848 { 00:06:56.848 "nbd_device": "/dev/nbd1", 00:06:56.848 "bdev_name": "Nvme1n1" 00:06:56.848 }, 00:06:56.848 { 00:06:56.848 "nbd_device": "/dev/nbd2", 00:06:56.848 "bdev_name": "Nvme2n1" 00:06:56.848 }, 00:06:56.848 { 00:06:56.848 "nbd_device": "/dev/nbd3", 00:06:56.848 "bdev_name": "Nvme2n2" 00:06:56.848 }, 00:06:56.848 { 00:06:56.848 "nbd_device": "/dev/nbd4", 00:06:56.848 "bdev_name": "Nvme2n3" 00:06:56.848 }, 00:06:56.848 { 00:06:56.848 "nbd_device": "/dev/nbd5", 00:06:56.848 "bdev_name": "Nvme3n1" 00:06:56.848 } 00:06:56.848 ]' 00:06:56.848 06:40:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:06:56.848 06:40:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:06:56.848 { 00:06:56.848 "nbd_device": "/dev/nbd0", 00:06:56.848 "bdev_name": "Nvme0n1" 00:06:56.848 }, 00:06:56.848 { 00:06:56.848 "nbd_device": "/dev/nbd1", 00:06:56.848 "bdev_name": "Nvme1n1" 00:06:56.848 }, 00:06:56.848 { 00:06:56.848 "nbd_device": "/dev/nbd2", 00:06:56.848 "bdev_name": "Nvme2n1" 00:06:56.848 }, 00:06:56.848 { 00:06:56.848 "nbd_device": "/dev/nbd3", 00:06:56.848 "bdev_name": "Nvme2n2" 00:06:56.848 }, 00:06:56.848 { 00:06:56.848 "nbd_device": "/dev/nbd4", 00:06:56.848 "bdev_name": "Nvme2n3" 00:06:56.848 }, 00:06:56.848 { 00:06:56.848 "nbd_device": "/dev/nbd5", 00:06:56.848 "bdev_name": "Nvme3n1" 00:06:56.848 } 00:06:56.848 ]' 00:06:56.848 06:40:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:06:57.108 06:40:49 blockdev_nvme.bdev_nbd -- 
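The waitfornbd helper traced above is the readiness gate behind every nbd_start_disk call: it polls /proc/partitions until the kernel registers the device, then proves the device actually answers a read by pulling one block with direct I/O. A minimal standalone sketch of the pattern (the real helper lives in common/autotest_common.sh; the sleep interval and temp path are assumptions, the loop bounds and dd/stat checks mirror the trace):

    waitfornbd() {
        local nbd_name=$1 i size
        # Poll until the device name appears in the kernel partition table.
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$nbd_name" /proc/partitions && break
            sleep 0.1   # assumed back-off; the interval is not visible in the trace
        done
        # Prove the device answers a real read: one 4 KiB direct-I/O block.
        for ((i = 1; i <= 20; i++)); do
            dd if="/dev/$nbd_name" of=/tmp/nbdtest bs=4096 count=1 iflag=direct && break
        done
        size=$(stat -c %s /tmp/nbdtest)
        rm -f /tmp/nbdtest
        [ "$size" != 0 ]   # a zero-byte copy would mean the read silently failed
    }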
bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5' 00:06:57.108 06:40:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:57.108 06:40:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5') 00:06:57.108 06:40:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:57.108 06:40:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:06:57.108 06:40:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:57.108 06:40:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:57.108 06:40:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:57.108 06:40:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:57.108 06:40:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:57.108 06:40:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:57.109 06:40:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:57.109 06:40:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:57.109 06:40:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:57.109 06:40:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:57.109 06:40:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:57.109 06:40:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:57.369 06:40:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:57.369 06:40:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:57.369 06:40:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:57.369 06:40:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:57.369 06:40:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:57.369 06:40:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:57.369 06:40:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:57.369 06:40:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:57.369 06:40:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:57.369 06:40:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:06:57.629 06:40:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:06:57.629 06:40:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:06:57.629 06:40:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:06:57.629 06:40:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:57.629 06:40:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:57.629 06:40:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:06:57.629 06:40:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:57.629 06:40:50 blockdev_nvme.bdev_nbd -- 
bdev/nbd_common.sh@45 -- # return 0 00:06:57.629 06:40:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:57.629 06:40:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:06:57.889 06:40:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:06:57.889 06:40:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:06:57.889 06:40:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:06:57.889 06:40:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:57.889 06:40:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:57.889 06:40:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:06:57.889 06:40:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:57.889 06:40:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:57.889 06:40:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:57.889 06:40:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:06:58.149 06:40:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:06:58.149 06:40:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:06:58.149 06:40:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:06:58.149 06:40:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:58.149 06:40:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:58.149 06:40:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:06:58.149 06:40:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:58.149 06:40:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:58.149 06:40:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:58.149 06:40:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:06:58.410 06:40:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:06:58.410 06:40:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:06:58.410 06:40:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:06:58.410 06:40:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:58.410 06:40:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:58.410 06:40:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:06:58.410 06:40:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:58.410 06:40:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:58.410 06:40:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:58.410 06:40:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:58.410 06:40:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:58.671 06:40:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:58.671 06:40:51 
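Teardown mirrors setup: nbd_stop_disk is issued per device and waitfornbd_exit spins until the name drops out of /proc/partitions, after which nbd_get_disks returning an empty JSON array confirms nothing is left exported. A sketch of the exit wait (same caveats as above; the sleep is an assumption):

    waitfornbd_exit() {
        local nbd_name=$1 i
        for ((i = 1; i <= 20; i++)); do
            # Done once the kernel has torn the device down.
            grep -q -w "$nbd_name" /proc/partitions || break
            sleep 0.1
        done
        return 0
    }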
blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:58.671 06:40:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:58.671 06:40:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:58.671 06:40:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:06:58.671 06:40:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:58.671 06:40:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:06:58.671 06:40:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:06:58.671 06:40:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:06:58.671 06:40:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:06:58.671 06:40:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:06:58.671 06:40:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:06:58.671 06:40:51 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:06:58.671 06:40:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:58.671 06:40:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:58.671 06:40:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:58.671 06:40:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:06:58.671 06:40:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:58.671 06:40:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:06:58.671 06:40:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:58.671 06:40:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:58.671 06:40:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:58.672 06:40:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:06:58.672 06:40:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:58.672 06:40:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:06:58.672 06:40:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:58.672 06:40:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:06:58.672 06:40:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0 00:06:58.933 /dev/nbd0 00:06:58.933 06:40:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:58.933 06:40:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:58.933 06:40:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:06:58.933 06:40:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:58.933 06:40:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:58.933 
06:40:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:58.933 06:40:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:06:58.933 06:40:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:58.933 06:40:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:58.933 06:40:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:58.933 06:40:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:58.933 1+0 records in 00:06:58.933 1+0 records out 00:06:58.933 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00104898 s, 3.9 MB/s 00:06:58.933 06:40:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:58.933 06:40:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:58.933 06:40:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:58.933 06:40:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:58.933 06:40:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:58.933 06:40:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:58.933 06:40:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:06:58.933 06:40:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 /dev/nbd1 00:06:59.195 /dev/nbd1 00:06:59.195 06:40:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:59.195 06:40:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:59.195 06:40:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:06:59.195 06:40:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:59.195 06:40:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:59.195 06:40:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:59.195 06:40:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:06:59.195 06:40:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:59.195 06:40:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:59.195 06:40:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:59.195 06:40:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:59.195 1+0 records in 00:06:59.195 1+0 records out 00:06:59.195 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000819797 s, 5.0 MB/s 00:06:59.195 06:40:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:59.195 06:40:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:59.195 06:40:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:59.195 06:40:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:59.195 06:40:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 
-- # return 0 00:06:59.195 06:40:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:59.195 06:40:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:06:59.195 06:40:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 /dev/nbd10 00:06:59.195 /dev/nbd10 00:06:59.456 06:40:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:06:59.456 06:40:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:06:59.456 06:40:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd10 00:06:59.456 06:40:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:59.456 06:40:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:59.456 06:40:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:59.456 06:40:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd10 /proc/partitions 00:06:59.456 06:40:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:59.456 06:40:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:59.456 06:40:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:59.456 06:40:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:59.456 1+0 records in 00:06:59.456 1+0 records out 00:06:59.456 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.0010654 s, 3.8 MB/s 00:06:59.456 06:40:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:59.456 06:40:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:59.456 06:40:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:59.456 06:40:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:59.456 06:40:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:59.456 06:40:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:59.456 06:40:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:06:59.456 06:40:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 /dev/nbd11 00:06:59.456 /dev/nbd11 00:06:59.456 06:40:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:06:59.456 06:40:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:06:59.456 06:40:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd11 00:06:59.456 06:40:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:59.456 06:40:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:59.456 06:40:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:59.456 06:40:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd11 /proc/partitions 00:06:59.716 06:40:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:59.716 06:40:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:59.716 06:40:52 blockdev_nvme.bdev_nbd -- 
common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:59.716 06:40:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:59.716 1+0 records in 00:06:59.716 1+0 records out 00:06:59.716 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000768973 s, 5.3 MB/s 00:06:59.716 06:40:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:59.716 06:40:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:59.716 06:40:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:59.716 06:40:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:59.716 06:40:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:59.716 06:40:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:59.716 06:40:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:06:59.716 06:40:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 /dev/nbd12 00:06:59.716 /dev/nbd12 00:06:59.716 06:40:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:06:59.716 06:40:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:06:59.716 06:40:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd12 00:06:59.716 06:40:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:59.716 06:40:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:59.716 06:40:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:59.716 06:40:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd12 /proc/partitions 00:06:59.716 06:40:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:59.716 06:40:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:59.716 06:40:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:59.716 06:40:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:59.716 1+0 records in 00:06:59.716 1+0 records out 00:06:59.716 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000885625 s, 4.6 MB/s 00:06:59.716 06:40:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:59.716 06:40:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:59.716 06:40:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:59.975 06:40:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:59.975 06:40:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:59.975 06:40:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:59.975 06:40:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:06:59.975 06:40:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 /dev/nbd13 00:06:59.975 /dev/nbd13 00:06:59.975 06:40:53 
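nbd_rpc_data_verify repeats the export, this time mapping the six bdevs onto a deliberately non-contiguous device set (/dev/nbd0, 1, 10, 11, 12, 13). Condensed, the start loop driving the trace looks like this (paths and lists exactly as in this run):

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    SOCK=/var/tmp/spdk-nbd.sock
    bdev_list=(Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1)
    nbd_list=(/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13)
    for ((i = 0; i < ${#nbd_list[@]}; i++)); do
        # Export the bdev over nbd, then wait for the device node to come up.
        "$RPC" -s "$SOCK" nbd_start_disk "${bdev_list[i]}" "${nbd_list[i]}"
        waitfornbd "$(basename "${nbd_list[i]}")"
    done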
blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:06:59.975 06:40:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:06:59.975 06:40:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd13 00:06:59.975 06:40:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:59.975 06:40:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:59.975 06:40:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:59.975 06:40:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd13 /proc/partitions 00:06:59.975 06:40:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:59.975 06:40:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:59.975 06:40:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:59.975 06:40:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:59.975 1+0 records in 00:06:59.975 1+0 records out 00:06:59.975 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000743461 s, 5.5 MB/s 00:06:59.975 06:40:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:59.975 06:40:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:59.975 06:40:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:59.975 06:40:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:59.975 06:40:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:59.975 06:40:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:59.975 06:40:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:06:59.975 06:40:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:59.975 06:40:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:59.975 06:40:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:00.233 06:40:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:07:00.233 { 00:07:00.233 "nbd_device": "/dev/nbd0", 00:07:00.233 "bdev_name": "Nvme0n1" 00:07:00.233 }, 00:07:00.233 { 00:07:00.233 "nbd_device": "/dev/nbd1", 00:07:00.233 "bdev_name": "Nvme1n1" 00:07:00.233 }, 00:07:00.233 { 00:07:00.233 "nbd_device": "/dev/nbd10", 00:07:00.233 "bdev_name": "Nvme2n1" 00:07:00.233 }, 00:07:00.233 { 00:07:00.233 "nbd_device": "/dev/nbd11", 00:07:00.233 "bdev_name": "Nvme2n2" 00:07:00.233 }, 00:07:00.233 { 00:07:00.233 "nbd_device": "/dev/nbd12", 00:07:00.233 "bdev_name": "Nvme2n3" 00:07:00.233 }, 00:07:00.233 { 00:07:00.233 "nbd_device": "/dev/nbd13", 00:07:00.233 "bdev_name": "Nvme3n1" 00:07:00.233 } 00:07:00.233 ]' 00:07:00.233 06:40:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:00.233 { 00:07:00.233 "nbd_device": "/dev/nbd0", 00:07:00.233 "bdev_name": "Nvme0n1" 00:07:00.233 }, 00:07:00.233 { 00:07:00.233 "nbd_device": "/dev/nbd1", 00:07:00.233 "bdev_name": "Nvme1n1" 00:07:00.233 }, 00:07:00.233 { 00:07:00.233 "nbd_device": "/dev/nbd10", 00:07:00.233 "bdev_name": "Nvme2n1" 00:07:00.233 }, 00:07:00.233 
{ 00:07:00.233 "nbd_device": "/dev/nbd11", 00:07:00.233 "bdev_name": "Nvme2n2" 00:07:00.233 }, 00:07:00.233 { 00:07:00.233 "nbd_device": "/dev/nbd12", 00:07:00.233 "bdev_name": "Nvme2n3" 00:07:00.233 }, 00:07:00.233 { 00:07:00.233 "nbd_device": "/dev/nbd13", 00:07:00.233 "bdev_name": "Nvme3n1" 00:07:00.233 } 00:07:00.233 ]' 00:07:00.234 06:40:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:00.234 06:40:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:07:00.234 /dev/nbd1 00:07:00.234 /dev/nbd10 00:07:00.234 /dev/nbd11 00:07:00.234 /dev/nbd12 00:07:00.234 /dev/nbd13' 00:07:00.234 06:40:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:00.234 06:40:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:07:00.234 /dev/nbd1 00:07:00.234 /dev/nbd10 00:07:00.234 /dev/nbd11 00:07:00.234 /dev/nbd12 00:07:00.234 /dev/nbd13' 00:07:00.234 06:40:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=6 00:07:00.234 06:40:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 6 00:07:00.234 06:40:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=6 00:07:00.234 06:40:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 6 -ne 6 ']' 00:07:00.234 06:40:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' write 00:07:00.234 06:40:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:07:00.234 06:40:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:00.234 06:40:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:07:00.234 06:40:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:07:00.234 06:40:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:07:00.234 06:40:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:07:00.234 256+0 records in 00:07:00.234 256+0 records out 00:07:00.234 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00771903 s, 136 MB/s 00:07:00.234 06:40:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:00.234 06:40:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:07:00.491 256+0 records in 00:07:00.492 256+0 records out 00:07:00.492 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.177765 s, 5.9 MB/s 00:07:00.492 06:40:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:00.492 06:40:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:07:00.750 256+0 records in 00:07:00.750 256+0 records out 00:07:00.750 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.224036 s, 4.7 MB/s 00:07:00.750 06:40:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:00.750 06:40:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:07:01.007 256+0 records in 00:07:01.007 256+0 records out 00:07:01.007 1048576 bytes (1.0 MB, 1.0 MiB) 
copied, 0.220787 s, 4.7 MB/s 00:07:01.007 06:40:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:01.007 06:40:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:07:01.265 256+0 records in 00:07:01.265 256+0 records out 00:07:01.265 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.225307 s, 4.7 MB/s 00:07:01.265 06:40:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:01.265 06:40:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:07:01.524 256+0 records in 00:07:01.524 256+0 records out 00:07:01.524 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.201308 s, 5.2 MB/s 00:07:01.524 06:40:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:01.524 06:40:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:07:01.524 256+0 records in 00:07:01.524 256+0 records out 00:07:01.524 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.218461 s, 4.8 MB/s 00:07:01.524 06:40:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' verify 00:07:01.524 06:40:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:07:01.524 06:40:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:01.524 06:40:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:07:01.524 06:40:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:07:01.524 06:40:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:07:01.524 06:40:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:07:01.524 06:40:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:01.524 06:40:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:07:01.524 06:40:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:01.524 06:40:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:07:01.524 06:40:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:01.524 06:40:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:07:01.524 06:40:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:01.524 06:40:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:07:01.524 06:40:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:01.524 06:40:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:07:01.783 06:40:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:01.783 06:40:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # 
cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:07:01.783 06:40:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:07:01.783 06:40:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:07:01.783 06:40:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:01.783 06:40:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:07:01.783 06:40:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:01.783 06:40:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:07:01.783 06:40:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:01.783 06:40:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:01.783 06:40:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:01.783 06:40:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:01.783 06:40:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:01.783 06:40:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:01.783 06:40:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:01.783 06:40:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:01.783 06:40:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:01.783 06:40:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:01.783 06:40:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:01.783 06:40:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:02.041 06:40:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:02.041 06:40:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:02.041 06:40:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:02.041 06:40:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:02.041 06:40:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:02.041 06:40:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:02.041 06:40:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:02.041 06:40:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:02.041 06:40:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:02.041 06:40:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:07:02.299 06:40:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:07:02.299 06:40:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:07:02.299 06:40:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:07:02.299 06:40:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:02.299 06:40:55 blockdev_nvme.bdev_nbd -- 
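The data-integrity pass above follows a simple write-then-compare shape: seed 1 MiB of random data once, push it through every exported device with direct I/O, then byte-compare each device against the seed file. In outline (temp path and device list as in this run):

    TMP=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest
    nbd_list=(/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13)
    dd if=/dev/urandom of="$TMP" bs=4096 count=256              # 1 MiB seed
    for nbd in "${nbd_list[@]}"; do
        dd if="$TMP" of="$nbd" bs=4096 count=256 oflag=direct   # write pass
    done
    for nbd in "${nbd_list[@]}"; do
        cmp -b -n 1M "$TMP" "$nbd"                              # verify pass
    done
    rm "$TMP"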
bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:02.299 06:40:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:07:02.299 06:40:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:02.299 06:40:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:02.299 06:40:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:02.299 06:40:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:07:02.558 06:40:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:07:02.558 06:40:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:07:02.558 06:40:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:07:02.558 06:40:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:02.558 06:40:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:02.558 06:40:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:07:02.558 06:40:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:02.558 06:40:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:02.558 06:40:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:02.558 06:40:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:07:02.816 06:40:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:07:02.816 06:40:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:07:02.816 06:40:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:07:02.816 06:40:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:02.816 06:40:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:02.816 06:40:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:07:02.816 06:40:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:02.816 06:40:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:02.816 06:40:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:02.816 06:40:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:07:02.816 06:40:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:07:02.816 06:40:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:07:02.816 06:40:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:07:02.816 06:40:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:02.816 06:40:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:02.816 06:40:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:07:02.816 06:40:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:02.816 06:40:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:03.075 06:40:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:03.075 06:40:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:07:03.075 06:40:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:03.075 06:40:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:03.075 06:40:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:03.075 06:40:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:03.075 06:40:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:03.075 06:40:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:07:03.075 06:40:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:03.075 06:40:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:07:03.075 06:40:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:07:03.075 06:40:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:07:03.075 06:40:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:07:03.075 06:40:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:07:03.075 06:40:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:07:03.075 06:40:56 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:07:03.075 06:40:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:03.075 06:40:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:07:03.075 06:40:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:07:03.333 malloc_lvol_verify 00:07:03.333 06:40:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:07:03.591 99706607-ca5b-44b9-918f-47bf8c2b7a33 00:07:03.591 06:40:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:07:03.848 c4e51f02-cfec-4210-bb3f-52fc1283773d 00:07:03.848 06:40:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:07:04.106 /dev/nbd0 00:07:04.106 06:40:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:07:04.106 06:40:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:07:04.106 06:40:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:07:04.106 06:40:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:07:04.106 06:40:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:07:04.106 mke2fs 1.47.0 (5-Feb-2023) 00:07:04.106 Discarding device blocks: 0/4096 done 00:07:04.106 Creating filesystem with 4096 1k blocks and 1024 inodes 00:07:04.106 00:07:04.106 Allocating group tables: 0/1 done 00:07:04.106 Writing inode tables: 0/1 done 00:07:04.106 Creating journal (1024 blocks): done 00:07:04.106 Writing superblocks and filesystem accounting information: 0/1 done 00:07:04.106 00:07:04.106 06:40:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:07:04.106 06:40:56 
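nbd_with_lvol_verify stacks the full chain: a malloc bdev, an lvstore on top of it, a logical volume carved from that, the volume exported over nbd, and finally a real ext4 filesystem created on the device. The RPC sequence, condensed from the trace (16 and 512 are the malloc bdev's size in MiB and block size, 4 the lvol size in MiB, per the usual rpc.py argument order):

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    SOCK=/var/tmp/spdk-nbd.sock
    "$RPC" -s "$SOCK" bdev_malloc_create -b malloc_lvol_verify 16 512
    "$RPC" -s "$SOCK" bdev_lvol_create_lvstore malloc_lvol_verify lvs
    "$RPC" -s "$SOCK" bdev_lvol_create lvol 4 -l lvs
    "$RPC" -s "$SOCK" nbd_start_disk lvs/lvol /dev/nbd0
    mkfs.ext4 /dev/nbd0    # succeeds above: 4096 1k-blocks, 1024 inodes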
blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:04.106 06:40:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:07:04.106 06:40:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:04.106 06:40:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:07:04.106 06:40:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:04.106 06:40:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:04.106 06:40:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:04.364 06:40:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:04.364 06:40:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:04.364 06:40:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:04.364 06:40:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:04.364 06:40:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:04.364 06:40:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:04.364 06:40:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:04.364 06:40:57 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 71914 00:07:04.364 06:40:57 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@954 -- # '[' -z 71914 ']' 00:07:04.364 06:40:57 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@958 -- # kill -0 71914 00:07:04.364 06:40:57 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@959 -- # uname 00:07:04.364 06:40:57 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:04.364 06:40:57 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71914 00:07:04.364 killing process with pid 71914 00:07:04.364 06:40:57 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:04.364 06:40:57 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:04.364 06:40:57 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71914' 00:07:04.364 06:40:57 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@973 -- # kill 71914 00:07:04.364 06:40:57 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@978 -- # wait 71914 00:07:04.364 ************************************ 00:07:04.364 END TEST bdev_nbd 00:07:04.364 ************************************ 00:07:04.364 06:40:57 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:07:04.364 00:07:04.364 real 0m9.966s 00:07:04.364 user 0m13.958s 00:07:04.364 sys 0m3.358s 00:07:04.364 06:40:57 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:04.364 06:40:57 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:07:04.364 skipping fio tests on NVMe due to multi-ns failures. 00:07:04.364 06:40:57 blockdev_nvme -- bdev/blockdev.sh@762 -- # [[ y == y ]] 00:07:04.364 06:40:57 blockdev_nvme -- bdev/blockdev.sh@763 -- # '[' nvme = nvme ']' 00:07:04.364 06:40:57 blockdev_nvme -- bdev/blockdev.sh@765 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 
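With the nbd checks done, run_test closes the stage: killprocess 71914 verifies the target is still alive, names it (reactor_0 in this run), kills it, and waits for it to exit before the timing summary is printed. The helper's shape, reconstructed from the trace above (platform and sudo-ownership handling trimmed):

    killprocess() {
        local pid=$1
        [ -n "$pid" ] || return 1           # refuse an empty pid
        kill -0 "$pid" || return 1          # confirm the process still exists
        local name
        name=$(ps --no-headers -o comm= "$pid")   # reactor_0 here
        echo "killing process with pid $pid"
        kill "$pid"                         # the trace shows a separate branch for sudo-owned processes
        wait "$pid"
    }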
00:07:04.364 06:40:57 blockdev_nvme -- bdev/blockdev.sh@774 -- # trap cleanup SIGINT SIGTERM EXIT 00:07:04.364 06:40:57 blockdev_nvme -- bdev/blockdev.sh@776 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:07:04.364 06:40:57 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:07:04.364 06:40:57 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:04.364 06:40:57 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:04.364 ************************************ 00:07:04.364 START TEST bdev_verify 00:07:04.364 ************************************ 00:07:04.364 06:40:57 blockdev_nvme.bdev_verify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:07:04.622 [2024-11-18 06:40:57.499686] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 00:07:04.622 [2024-11-18 06:40:57.499794] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72286 ] 00:07:04.622 [2024-11-18 06:40:57.652812] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:04.622 [2024-11-18 06:40:57.670034] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:04.622 [2024-11-18 06:40:57.670041] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:05.188 Running I/O for 5 seconds... 00:07:07.127 26176.00 IOPS, 102.25 MiB/s [2024-11-18T06:41:01.597Z] 25664.00 IOPS, 100.25 MiB/s [2024-11-18T06:41:02.532Z] 25173.33 IOPS, 98.33 MiB/s [2024-11-18T06:41:03.468Z] 24256.00 IOPS, 94.75 MiB/s [2024-11-18T06:41:03.468Z] 23769.60 IOPS, 92.85 MiB/s 00:07:10.381 Latency(us) 00:07:10.381 [2024-11-18T06:41:03.468Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:10.381 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:10.381 Verification LBA range: start 0x0 length 0xbd0bd 00:07:10.381 Nvme0n1 : 5.05 2053.94 8.02 0.00 0.00 62171.28 9830.40 61301.37 00:07:10.381 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:10.381 Verification LBA range: start 0xbd0bd length 0xbd0bd 00:07:10.381 Nvme0n1 : 5.08 1866.20 7.29 0.00 0.00 67821.70 7662.67 66140.95 00:07:10.381 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:10.381 Verification LBA range: start 0x0 length 0xa0000 00:07:10.381 Nvme1n1 : 5.05 2053.33 8.02 0.00 0.00 62101.24 11846.89 58074.98 00:07:10.381 Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:10.381 Verification LBA range: start 0xa0000 length 0xa0000 00:07:10.381 Nvme1n1 : 5.05 1861.95 7.27 0.00 0.00 68438.63 7763.50 63317.86 00:07:10.381 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:10.381 Verification LBA range: start 0x0 length 0x80000 00:07:10.381 Nvme2n1 : 5.05 2052.74 8.02 0.00 0.00 62028.32 12804.73 56461.78 00:07:10.381 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:10.381 Verification LBA range: start 0x80000 length 0x80000 00:07:10.381 Nvme2n1 : 5.07 1868.17 7.30 0.00 0.00 68201.97 13006.38 62914.56 00:07:10.381 Job: Nvme2n2 
(Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:10.381 Verification LBA range: start 0x0 length 0x80000 00:07:10.381 Nvme2n2 : 5.05 2052.03 8.02 0.00 0.00 61942.79 13510.50 56058.49 00:07:10.381 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:10.381 Verification LBA range: start 0x80000 length 0x80000 00:07:10.381 Nvme2n2 : 5.07 1867.68 7.30 0.00 0.00 68097.31 13308.85 62511.26 00:07:10.381 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:10.381 Verification LBA range: start 0x0 length 0x80000 00:07:10.381 Nvme2n3 : 5.06 2050.66 8.01 0.00 0.00 61890.16 11947.72 57671.68 00:07:10.381 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:10.381 Verification LBA range: start 0x80000 length 0x80000 00:07:10.381 Nvme2n3 : 5.07 1867.19 7.29 0.00 0.00 67962.27 13510.50 60494.77 00:07:10.381 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:10.381 Verification LBA range: start 0x0 length 0x20000 00:07:10.381 Nvme3n1 : 5.06 2050.06 8.01 0.00 0.00 61827.86 6200.71 59284.87 00:07:10.381 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:10.381 Verification LBA range: start 0x20000 length 0x20000 00:07:10.381 Nvme3n1 : 5.07 1866.69 7.29 0.00 0.00 67862.55 10183.29 63317.86 00:07:10.381 [2024-11-18T06:41:03.468Z] =================================================================================================================== 00:07:10.381 [2024-11-18T06:41:03.468Z] Total : 23510.65 91.84 0.00 0.00 64889.86 6200.71 66140.95 00:07:10.947 00:07:10.947 real 0m6.329s 00:07:10.947 user 0m11.997s 00:07:10.947 sys 0m0.179s 00:07:10.947 ************************************ 00:07:10.947 END TEST bdev_verify 00:07:10.947 ************************************ 00:07:10.947 06:41:03 blockdev_nvme.bdev_verify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:10.947 06:41:03 blockdev_nvme.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:07:10.947 06:41:03 blockdev_nvme -- bdev/blockdev.sh@777 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:07:10.947 06:41:03 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:07:10.947 06:41:03 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:10.947 06:41:03 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:10.947 ************************************ 00:07:10.947 START TEST bdev_verify_big_io 00:07:10.947 ************************************ 00:07:10.947 06:41:03 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:07:10.947 [2024-11-18 06:41:03.887733] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 
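bdev_verify and the bdev_verify_big_io stage now starting are the same bdevperf harness at two I/O sizes; only -o changes (4 KiB versus 64 KiB), which is why the big-I/O pass trades IOPS for per-I/O bandwidth in the tables. The two invocations as issued by run_test in this log:

    BDEVPERF=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf
    CONF=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json
    # -q queue depth, -o I/O size in bytes, -w workload, -t run time in
    # seconds, -m reactor core mask (0x3 = cores 0 and 1); -C passed as in the log
    "$BDEVPERF" --json "$CONF" -q 128 -o 4096  -w verify -t 5 -C -m 0x3 ''
    "$BDEVPERF" --json "$CONF" -q 128 -o 65536 -w verify -t 5 -C -m 0x3 ''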
00:07:10.947 [2024-11-18 06:41:03.887839] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72379 ] 00:07:11.207 [2024-11-18 06:41:04.046362] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:11.207 [2024-11-18 06:41:04.066543] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:11.207 [2024-11-18 06:41:04.066654] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:11.468 Running I/O for 5 seconds... 00:07:15.900 16.00 IOPS, 1.00 MiB/s [2024-11-18T06:41:10.360Z] 1356.50 IOPS, 84.78 MiB/s [2024-11-18T06:41:10.619Z] 2147.33 IOPS, 134.21 MiB/s 00:07:17.532 Latency(us) 00:07:17.532 [2024-11-18T06:41:10.619Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:17.532 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:17.532 Verification LBA range: start 0x0 length 0xbd0b 00:07:17.532 Nvme0n1 : 5.56 126.61 7.91 0.00 0.00 968945.61 18652.55 1187310.67 00:07:17.532 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:17.532 Verification LBA range: start 0xbd0b length 0xbd0b 00:07:17.532 Nvme0n1 : 5.71 130.07 8.13 0.00 0.00 938397.83 15526.99 1200216.22 00:07:17.532 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:17.532 Verification LBA range: start 0x0 length 0xa000 00:07:17.532 Nvme1n1 : 5.74 129.96 8.12 0.00 0.00 910340.59 66947.54 993727.41 00:07:17.532 Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:17.532 Verification LBA range: start 0xa000 length 0xa000 00:07:17.532 Nvme1n1 : 5.71 134.40 8.40 0.00 0.00 889141.83 105664.20 987274.63 00:07:17.532 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:17.532 Verification LBA range: start 0x0 length 0x8000 00:07:17.532 Nvme2n1 : 5.74 133.76 8.36 0.00 0.00 863125.92 109697.18 987274.63 00:07:17.532 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:17.532 Verification LBA range: start 0x8000 length 0x8000 00:07:17.532 Nvme2n1 : 5.72 134.36 8.40 0.00 0.00 854936.02 108890.58 784012.21 00:07:17.532 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:17.532 Verification LBA range: start 0x0 length 0x8000 00:07:17.532 Nvme2n2 : 5.86 141.87 8.87 0.00 0.00 788923.17 39724.90 1019538.51 00:07:17.532 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:17.532 Verification LBA range: start 0x8000 length 0x8000 00:07:17.532 Nvme2n2 : 5.93 146.95 9.18 0.00 0.00 756901.03 30247.38 896935.78 00:07:17.532 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:17.532 Verification LBA range: start 0x0 length 0x8000 00:07:17.532 Nvme2n3 : 5.94 150.94 9.43 0.00 0.00 716787.96 41338.09 1045349.61 00:07:17.532 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:17.532 Verification LBA range: start 0x8000 length 0x8000 00:07:17.532 Nvme2n3 : 5.94 142.70 8.92 0.00 0.00 754179.79 41741.39 1935832.62 00:07:17.532 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:17.532 Verification LBA range: start 0x0 length 0x2000 00:07:17.532 Nvme3n1 : 6.01 170.51 10.66 0.00 0.00 614519.28 428.50 1071160.71 00:07:17.532 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO 
size: 65536) 00:07:17.532 Verification LBA range: start 0x2000 length 0x2000 00:07:17.532 Nvme3n1 : 6.03 177.89 11.12 0.00 0.00 586175.83 275.69 1961643.72 00:07:17.532 [2024-11-18T06:41:10.620Z] =================================================================================================================== 00:07:17.533 [2024-11-18T06:41:10.620Z] Total : 1720.03 107.50 0.00 0.00 788435.87 275.69 1961643.72 00:07:18.909 00:07:18.909 real 0m7.978s 00:07:18.909 user 0m15.240s 00:07:18.909 sys 0m0.226s 00:07:18.909 06:41:11 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:18.909 ************************************ 00:07:18.909 END TEST bdev_verify_big_io 00:07:18.909 06:41:11 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:07:18.909 ************************************ 00:07:18.909 06:41:11 blockdev_nvme -- bdev/blockdev.sh@778 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:18.909 06:41:11 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:07:18.909 06:41:11 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:18.909 06:41:11 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:18.909 ************************************ 00:07:18.909 START TEST bdev_write_zeroes 00:07:18.909 ************************************ 00:07:18.909 06:41:11 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:18.909 [2024-11-18 06:41:11.922725] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 00:07:18.909 [2024-11-18 06:41:11.922857] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72482 ] 00:07:19.170 [2024-11-18 06:41:12.078139] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:19.170 [2024-11-18 06:41:12.101462] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:19.430 Running I/O for 1 seconds... 
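The verify pass above settles at 1720.03 aggregate IOPS across the six namespaces, and bdevperf has now been relaunched for a one-second write_zeroes pass with the queue depth, I/O size, and duration visible in the trace. A minimal sketch of the equivalent standalone invocation, assuming the same checkout path as this run:

    SPDK_REPO=/home/vagrant/spdk_repo/spdk
    # 128 outstanding I/Os, 4 KiB blocks, write_zeroes workload, 1-second run,
    # against the bdevs defined in the suite's generated bdev.json
    "$SPDK_REPO/build/examples/bdevperf" \
        --json "$SPDK_REPO/test/bdev/bdev.json" \
        -q 128 -o 4096 -w write_zeroes -t 1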
00:07:20.815 64896.00 IOPS, 253.50 MiB/s 00:07:20.815 Latency(us) 00:07:20.815 [2024-11-18T06:41:13.902Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:20.815 Job: Nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:20.815 Nvme0n1 : 1.02 10777.72 42.10 0.00 0.00 11855.19 5192.47 23492.14 00:07:20.815 Job: Nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:20.815 Nvme1n1 : 1.02 10765.24 42.05 0.00 0.00 11855.95 8620.50 21576.47 00:07:20.815 Job: Nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:20.815 Nvme2n1 : 1.02 10753.03 42.00 0.00 0.00 11841.33 8368.44 20164.92 00:07:20.815 Job: Nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:20.815 Nvme2n2 : 1.02 10740.91 41.96 0.00 0.00 11839.20 8469.27 19761.62 00:07:20.815 Job: Nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:20.815 Nvme2n3 : 1.03 10728.60 41.91 0.00 0.00 11817.01 7864.32 20769.87 00:07:20.815 Job: Nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:20.815 Nvme3n1 : 1.03 10716.52 41.86 0.00 0.00 11812.44 7713.08 22483.89 00:07:20.815 [2024-11-18T06:41:13.902Z] =================================================================================================================== 00:07:20.815 [2024-11-18T06:41:13.902Z] Total : 64482.02 251.88 0.00 0.00 11836.85 5192.47 23492.14 00:07:20.815 00:07:20.815 real 0m1.820s 00:07:20.815 user 0m1.528s 00:07:20.815 sys 0m0.177s 00:07:20.815 ************************************ 00:07:20.815 END TEST bdev_write_zeroes 00:07:20.815 ************************************ 00:07:20.815 06:41:13 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:20.815 06:41:13 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:07:20.815 06:41:13 blockdev_nvme -- bdev/blockdev.sh@781 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:20.815 06:41:13 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:07:20.815 06:41:13 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:20.815 06:41:13 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:20.815 ************************************ 00:07:20.815 START TEST bdev_json_nonenclosed 00:07:20.815 ************************************ 00:07:20.815 06:41:13 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:20.815 [2024-11-18 06:41:13.800727] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 
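The write_zeroes totals above close that test, and the suite moves on to the first of two negative JSON-config checks: bdevperf is pointed at test/bdev/nonenclosed.json and passes only if the app refuses to start. The fixture's contents are not reproduced in this log, but given the json_config error traced below, any well-formed JSON document whose root is not an object should trip the same rejection; a hypothetical stand-in:

    # Hypothetical stand-in for nonenclosed.json (the repo's actual fixture is
    # not shown here): the root is an array rather than an object, so
    # json_config_prepare_ctx rejects it as "not enclosed in {}".
    cat > /tmp/nonenclosed.json <<'EOF'
    [ { "subsystems": [] } ]
    EOF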
00:07:20.815 [2024-11-18 06:41:13.800880] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72519 ] 00:07:21.077 [2024-11-18 06:41:13.962053] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:21.077 [2024-11-18 06:41:13.991439] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:21.077 [2024-11-18 06:41:13.991545] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:07:21.077 [2024-11-18 06:41:13.991563] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:07:21.077 [2024-11-18 06:41:13.991579] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:21.077 00:07:21.077 real 0m0.338s 00:07:21.077 user 0m0.125s 00:07:21.077 sys 0m0.108s 00:07:21.077 ************************************ 00:07:21.077 END TEST bdev_json_nonenclosed 00:07:21.077 06:41:14 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:21.077 06:41:14 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:07:21.077 ************************************ 00:07:21.077 06:41:14 blockdev_nvme -- bdev/blockdev.sh@784 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:21.077 06:41:14 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:07:21.077 06:41:14 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:21.077 06:41:14 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:21.077 ************************************ 00:07:21.077 START TEST bdev_json_nonarray 00:07:21.077 ************************************ 00:07:21.077 06:41:14 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:21.338 [2024-11-18 06:41:14.204412] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 00:07:21.338 [2024-11-18 06:41:14.204563] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72544 ] 00:07:21.338 [2024-11-18 06:41:14.365968] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:21.338 [2024-11-18 06:41:14.394939] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:21.338 [2024-11-18 06:41:14.395074] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
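As with the nonenclosed case, the *ERROR* above is the expected outcome rather than a failure: nonarray.json deliberately supplies a "subsystems" key that is not an array, and bdev_json_nonarray passes only because json_config_prepare_ctx rejects it and the app stops non-zero. The fixture itself is likewise not shown in this log; a hypothetical config that would hit the same check:

    # Hypothetical stand-in for nonarray.json: "subsystems" maps to an object,
    # not an array, so json_config_prepare_ctx raises the error traced above.
    cat > /tmp/nonarray.json <<'EOF'
    { "subsystems": { "method": "bdev_nvme_attach_controller" } }
    EOF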
00:07:21.338 [2024-11-18 06:41:14.395092] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:07:21.338 [2024-11-18 06:41:14.395104] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:21.601 00:07:21.601 real 0m0.336s 00:07:21.601 user 0m0.116s 00:07:21.601 sys 0m0.117s 00:07:21.601 06:41:14 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:21.601 06:41:14 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:07:21.601 ************************************ 00:07:21.601 END TEST bdev_json_nonarray 00:07:21.601 ************************************ 00:07:21.601 06:41:14 blockdev_nvme -- bdev/blockdev.sh@786 -- # [[ nvme == bdev ]] 00:07:21.601 06:41:14 blockdev_nvme -- bdev/blockdev.sh@793 -- # [[ nvme == gpt ]] 00:07:21.601 06:41:14 blockdev_nvme -- bdev/blockdev.sh@797 -- # [[ nvme == crypto_sw ]] 00:07:21.601 06:41:14 blockdev_nvme -- bdev/blockdev.sh@809 -- # trap - SIGINT SIGTERM EXIT 00:07:21.601 06:41:14 blockdev_nvme -- bdev/blockdev.sh@810 -- # cleanup 00:07:21.601 06:41:14 blockdev_nvme -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:07:21.601 06:41:14 blockdev_nvme -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:07:21.601 06:41:14 blockdev_nvme -- bdev/blockdev.sh@26 -- # [[ nvme == rbd ]] 00:07:21.601 06:41:14 blockdev_nvme -- bdev/blockdev.sh@30 -- # [[ nvme == daos ]] 00:07:21.601 06:41:14 blockdev_nvme -- bdev/blockdev.sh@34 -- # [[ nvme = \g\p\t ]] 00:07:21.601 06:41:14 blockdev_nvme -- bdev/blockdev.sh@40 -- # [[ nvme == xnvme ]] 00:07:21.601 00:07:21.601 real 0m31.485s 00:07:21.601 user 0m49.088s 00:07:21.601 sys 0m5.461s 00:07:21.601 06:41:14 blockdev_nvme -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:21.601 06:41:14 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:21.601 ************************************ 00:07:21.601 END TEST blockdev_nvme 00:07:21.601 ************************************ 00:07:21.601 06:41:14 -- spdk/autotest.sh@209 -- # uname -s 00:07:21.601 06:41:14 -- spdk/autotest.sh@209 -- # [[ Linux == Linux ]] 00:07:21.601 06:41:14 -- spdk/autotest.sh@210 -- # run_test blockdev_nvme_gpt /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 00:07:21.601 06:41:14 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:21.601 06:41:14 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:21.601 06:41:14 -- common/autotest_common.sh@10 -- # set +x 00:07:21.601 ************************************ 00:07:21.601 START TEST blockdev_nvme_gpt 00:07:21.601 ************************************ 00:07:21.601 06:41:14 blockdev_nvme_gpt -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 00:07:21.601 * Looking for test storage... 
00:07:21.601 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:07:21.601 06:41:14 blockdev_nvme_gpt -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:21.601 06:41:14 blockdev_nvme_gpt -- common/autotest_common.sh@1693 -- # lcov --version 00:07:21.601 06:41:14 blockdev_nvme_gpt -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:21.863 06:41:14 blockdev_nvme_gpt -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:21.863 06:41:14 blockdev_nvme_gpt -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:21.863 06:41:14 blockdev_nvme_gpt -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:21.863 06:41:14 blockdev_nvme_gpt -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:21.863 06:41:14 blockdev_nvme_gpt -- scripts/common.sh@336 -- # IFS=.-: 00:07:21.863 06:41:14 blockdev_nvme_gpt -- scripts/common.sh@336 -- # read -ra ver1 00:07:21.863 06:41:14 blockdev_nvme_gpt -- scripts/common.sh@337 -- # IFS=.-: 00:07:21.863 06:41:14 blockdev_nvme_gpt -- scripts/common.sh@337 -- # read -ra ver2 00:07:21.863 06:41:14 blockdev_nvme_gpt -- scripts/common.sh@338 -- # local 'op=<' 00:07:21.863 06:41:14 blockdev_nvme_gpt -- scripts/common.sh@340 -- # ver1_l=2 00:07:21.863 06:41:14 blockdev_nvme_gpt -- scripts/common.sh@341 -- # ver2_l=1 00:07:21.863 06:41:14 blockdev_nvme_gpt -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:21.863 06:41:14 blockdev_nvme_gpt -- scripts/common.sh@344 -- # case "$op" in 00:07:21.863 06:41:14 blockdev_nvme_gpt -- scripts/common.sh@345 -- # : 1 00:07:21.863 06:41:14 blockdev_nvme_gpt -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:21.863 06:41:14 blockdev_nvme_gpt -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:21.863 06:41:14 blockdev_nvme_gpt -- scripts/common.sh@365 -- # decimal 1 00:07:21.863 06:41:14 blockdev_nvme_gpt -- scripts/common.sh@353 -- # local d=1 00:07:21.863 06:41:14 blockdev_nvme_gpt -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:21.863 06:41:14 blockdev_nvme_gpt -- scripts/common.sh@355 -- # echo 1 00:07:21.863 06:41:14 blockdev_nvme_gpt -- scripts/common.sh@365 -- # ver1[v]=1 00:07:21.863 06:41:14 blockdev_nvme_gpt -- scripts/common.sh@366 -- # decimal 2 00:07:21.863 06:41:14 blockdev_nvme_gpt -- scripts/common.sh@353 -- # local d=2 00:07:21.863 06:41:14 blockdev_nvme_gpt -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:21.863 06:41:14 blockdev_nvme_gpt -- scripts/common.sh@355 -- # echo 2 00:07:21.863 06:41:14 blockdev_nvme_gpt -- scripts/common.sh@366 -- # ver2[v]=2 00:07:21.863 06:41:14 blockdev_nvme_gpt -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:21.863 06:41:14 blockdev_nvme_gpt -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:21.863 06:41:14 blockdev_nvme_gpt -- scripts/common.sh@368 -- # return 0 00:07:21.863 06:41:14 blockdev_nvme_gpt -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:21.863 06:41:14 blockdev_nvme_gpt -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:21.863 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:21.863 --rc genhtml_branch_coverage=1 00:07:21.863 --rc genhtml_function_coverage=1 00:07:21.864 --rc genhtml_legend=1 00:07:21.864 --rc geninfo_all_blocks=1 00:07:21.864 --rc geninfo_unexecuted_blocks=1 00:07:21.864 00:07:21.864 ' 00:07:21.864 06:41:14 blockdev_nvme_gpt -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:21.864 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:21.864 --rc 
genhtml_branch_coverage=1 00:07:21.864 --rc genhtml_function_coverage=1 00:07:21.864 --rc genhtml_legend=1 00:07:21.864 --rc geninfo_all_blocks=1 00:07:21.864 --rc geninfo_unexecuted_blocks=1 00:07:21.864 00:07:21.864 ' 00:07:21.864 06:41:14 blockdev_nvme_gpt -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:21.864 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:21.864 --rc genhtml_branch_coverage=1 00:07:21.864 --rc genhtml_function_coverage=1 00:07:21.864 --rc genhtml_legend=1 00:07:21.864 --rc geninfo_all_blocks=1 00:07:21.864 --rc geninfo_unexecuted_blocks=1 00:07:21.864 00:07:21.864 ' 00:07:21.864 06:41:14 blockdev_nvme_gpt -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:21.864 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:21.864 --rc genhtml_branch_coverage=1 00:07:21.864 --rc genhtml_function_coverage=1 00:07:21.864 --rc genhtml_legend=1 00:07:21.864 --rc geninfo_all_blocks=1 00:07:21.864 --rc geninfo_unexecuted_blocks=1 00:07:21.864 00:07:21.864 ' 00:07:21.864 06:41:14 blockdev_nvme_gpt -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:07:21.864 06:41:14 blockdev_nvme_gpt -- bdev/nbd_common.sh@6 -- # set -e 00:07:21.864 06:41:14 blockdev_nvme_gpt -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:07:21.864 06:41:14 blockdev_nvme_gpt -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:07:21.864 06:41:14 blockdev_nvme_gpt -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:07:21.864 06:41:14 blockdev_nvme_gpt -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:07:21.864 06:41:14 blockdev_nvme_gpt -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:07:21.864 06:41:14 blockdev_nvme_gpt -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:07:21.864 06:41:14 blockdev_nvme_gpt -- bdev/blockdev.sh@20 -- # : 00:07:21.864 06:41:14 blockdev_nvme_gpt -- bdev/blockdev.sh@669 -- # QOS_DEV_1=Malloc_0 00:07:21.864 06:41:14 blockdev_nvme_gpt -- bdev/blockdev.sh@670 -- # QOS_DEV_2=Null_1 00:07:21.864 06:41:14 blockdev_nvme_gpt -- bdev/blockdev.sh@671 -- # QOS_RUN_TIME=5 00:07:21.864 06:41:14 blockdev_nvme_gpt -- bdev/blockdev.sh@673 -- # uname -s 00:07:21.864 06:41:14 blockdev_nvme_gpt -- bdev/blockdev.sh@673 -- # '[' Linux = Linux ']' 00:07:21.864 06:41:14 blockdev_nvme_gpt -- bdev/blockdev.sh@675 -- # PRE_RESERVED_MEM=0 00:07:21.864 06:41:14 blockdev_nvme_gpt -- bdev/blockdev.sh@681 -- # test_type=gpt 00:07:21.864 06:41:14 blockdev_nvme_gpt -- bdev/blockdev.sh@682 -- # crypto_device= 00:07:21.864 06:41:14 blockdev_nvme_gpt -- bdev/blockdev.sh@683 -- # dek= 00:07:21.864 06:41:14 blockdev_nvme_gpt -- bdev/blockdev.sh@684 -- # env_ctx= 00:07:21.864 06:41:14 blockdev_nvme_gpt -- bdev/blockdev.sh@685 -- # wait_for_rpc= 00:07:21.864 06:41:14 blockdev_nvme_gpt -- bdev/blockdev.sh@686 -- # '[' -n '' ']' 00:07:21.864 06:41:14 blockdev_nvme_gpt -- bdev/blockdev.sh@689 -- # [[ gpt == bdev ]] 00:07:21.864 06:41:14 blockdev_nvme_gpt -- bdev/blockdev.sh@689 -- # [[ gpt == crypto_* ]] 00:07:21.864 06:41:14 blockdev_nvme_gpt -- bdev/blockdev.sh@692 -- # start_spdk_tgt 00:07:21.864 06:41:14 blockdev_nvme_gpt -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=72623 00:07:21.864 06:41:14 blockdev_nvme_gpt -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:07:21.864 06:41:14 blockdev_nvme_gpt -- bdev/blockdev.sh@49 -- # waitforlisten 72623 
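At this point the gpt suite switches from one-shot example binaries to a long-lived spdk_tgt daemon: blockdev.sh records the pid (72623), installs a kill trap, and blocks in waitforlisten until the target's RPC socket answers. A simplified sketch of that launch-and-wait pattern (the real waitforlisten in autotest_common.sh adds retry limits and diagnostics):

    SPDK_REPO=/home/vagrant/spdk_repo/spdk
    # Launch the target and remember its pid for cleanup.
    "$SPDK_REPO/build/bin/spdk_tgt" &
    tgt_pid=$!
    trap 'kill "$tgt_pid"' EXIT
    # Poll the default RPC socket until the target responds.
    until "$SPDK_REPO/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.2
    done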
00:07:21.864 06:41:14 blockdev_nvme_gpt -- common/autotest_common.sh@835 -- # '[' -z 72623 ']' 00:07:21.864 06:41:14 blockdev_nvme_gpt -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:21.864 06:41:14 blockdev_nvme_gpt -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:07:21.864 06:41:14 blockdev_nvme_gpt -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:21.864 06:41:14 blockdev_nvme_gpt -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:21.864 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:21.864 06:41:14 blockdev_nvme_gpt -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:21.864 06:41:14 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:21.864 [2024-11-18 06:41:14.808732] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 00:07:21.864 [2024-11-18 06:41:14.808848] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72623 ] 00:07:22.125 [2024-11-18 06:41:14.965920] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:22.125 [2024-11-18 06:41:14.992630] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:22.694 06:41:15 blockdev_nvme_gpt -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:22.694 06:41:15 blockdev_nvme_gpt -- common/autotest_common.sh@868 -- # return 0 00:07:22.694 06:41:15 blockdev_nvme_gpt -- bdev/blockdev.sh@693 -- # case "$test_type" in 00:07:22.694 06:41:15 blockdev_nvme_gpt -- bdev/blockdev.sh@701 -- # setup_gpt_conf 00:07:22.694 06:41:15 blockdev_nvme_gpt -- bdev/blockdev.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:07:22.956 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:07:23.217 Waiting for block devices as requested 00:07:23.217 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:07:23.217 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:07:23.217 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:07:23.478 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:07:28.831 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:07:28.831 06:41:21 blockdev_nvme_gpt -- bdev/blockdev.sh@105 -- # get_zoned_devs 00:07:28.831 06:41:21 blockdev_nvme_gpt -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:07:28.831 06:41:21 blockdev_nvme_gpt -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:07:28.831 06:41:21 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # local nvme bdf 00:07:28.831 06:41:21 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:07:28.831 06:41:21 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n1 00:07:28.831 06:41:21 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:07:28.831 06:41:21 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:07:28.831 06:41:21 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:07:28.831 06:41:21 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:07:28.831 06:41:21 
blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1n1 00:07:28.831 06:41:21 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:07:28.831 06:41:21 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:07:28.831 06:41:21 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:07:28.831 06:41:21 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:07:28.831 06:41:21 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # is_block_zoned nvme2n1 00:07:28.831 06:41:21 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme2n1 00:07:28.831 06:41:21 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:07:28.831 06:41:21 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:07:28.831 06:41:21 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:07:28.831 06:41:21 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # is_block_zoned nvme2n2 00:07:28.831 06:41:21 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme2n2 00:07:28.831 06:41:21 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:07:28.831 06:41:21 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:07:28.831 06:41:21 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:07:28.831 06:41:21 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # is_block_zoned nvme2n3 00:07:28.831 06:41:21 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme2n3 00:07:28.831 06:41:21 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:07:28.831 06:41:21 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:07:28.831 06:41:21 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:07:28.831 06:41:21 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # is_block_zoned nvme3c3n1 00:07:28.831 06:41:21 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme3c3n1 00:07:28.831 06:41:21 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:07:28.831 06:41:21 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:07:28.831 06:41:21 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:07:28.831 06:41:21 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # is_block_zoned nvme3n1 00:07:28.831 06:41:21 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme3n1 00:07:28.831 06:41:21 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:07:28.831 06:41:21 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:07:28.831 06:41:21 blockdev_nvme_gpt -- bdev/blockdev.sh@106 -- # nvme_devs=('/sys/block/nvme0n1' '/sys/block/nvme1n1' '/sys/block/nvme2n1' '/sys/block/nvme2n2' '/sys/block/nvme2n3' '/sys/block/nvme3n1') 00:07:28.831 06:41:21 blockdev_nvme_gpt -- bdev/blockdev.sh@106 -- # local nvme_devs nvme_dev 00:07:28.831 06:41:21 blockdev_nvme_gpt -- bdev/blockdev.sh@107 -- # gpt_nvme= 00:07:28.831 06:41:21 blockdev_nvme_gpt -- bdev/blockdev.sh@109 -- # for nvme_dev in "${nvme_devs[@]}" 00:07:28.831 06:41:21 
blockdev_nvme_gpt -- bdev/blockdev.sh@110 -- # [[ -z '' ]] 00:07:28.831 06:41:21 blockdev_nvme_gpt -- bdev/blockdev.sh@111 -- # dev=/dev/nvme0n1 00:07:28.831 06:41:21 blockdev_nvme_gpt -- bdev/blockdev.sh@112 -- # parted /dev/nvme0n1 -ms print 00:07:28.831 06:41:21 blockdev_nvme_gpt -- bdev/blockdev.sh@112 -- # pt='Error: /dev/nvme0n1: unrecognised disk label 00:07:28.831 BYT; 00:07:28.831 /dev/nvme0n1:5369MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:;' 00:07:28.831 06:41:21 blockdev_nvme_gpt -- bdev/blockdev.sh@113 -- # [[ Error: /dev/nvme0n1: unrecognised disk label 00:07:28.831 BYT; 00:07:28.831 /dev/nvme0n1:5369MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:; == *\/\d\e\v\/\n\v\m\e\0\n\1\:\ \u\n\r\e\c\o\g\n\i\s\e\d\ \d\i\s\k\ \l\a\b\e\l* ]] 00:07:28.831 06:41:21 blockdev_nvme_gpt -- bdev/blockdev.sh@114 -- # gpt_nvme=/dev/nvme0n1 00:07:28.831 06:41:21 blockdev_nvme_gpt -- bdev/blockdev.sh@115 -- # break 00:07:28.831 06:41:21 blockdev_nvme_gpt -- bdev/blockdev.sh@118 -- # [[ -n /dev/nvme0n1 ]] 00:07:28.831 06:41:21 blockdev_nvme_gpt -- bdev/blockdev.sh@123 -- # typeset -g g_unique_partguid=6f89f330-603b-4116-ac73-2ca8eae53030 00:07:28.831 06:41:21 blockdev_nvme_gpt -- bdev/blockdev.sh@124 -- # typeset -g g_unique_partguid_old=abf1734f-66e5-4c0f-aa29-4021d4d307df 00:07:28.831 06:41:21 blockdev_nvme_gpt -- bdev/blockdev.sh@127 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST_first 0% 50% mkpart SPDK_TEST_second 50% 100% 00:07:28.831 06:41:21 blockdev_nvme_gpt -- bdev/blockdev.sh@129 -- # get_spdk_gpt_old 00:07:28.831 06:41:21 blockdev_nvme_gpt -- scripts/common.sh@411 -- # local spdk_guid 00:07:28.831 06:41:21 blockdev_nvme_gpt -- scripts/common.sh@413 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:07:28.832 06:41:21 blockdev_nvme_gpt -- scripts/common.sh@415 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:07:28.832 06:41:21 blockdev_nvme_gpt -- scripts/common.sh@416 -- # IFS='()' 00:07:28.832 06:41:21 blockdev_nvme_gpt -- scripts/common.sh@416 -- # read -r _ spdk_guid _ 00:07:28.832 06:41:21 blockdev_nvme_gpt -- scripts/common.sh@416 -- # grep -w SPDK_GPT_PART_TYPE_GUID_OLD /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:07:28.832 06:41:21 blockdev_nvme_gpt -- scripts/common.sh@417 -- # spdk_guid=0x7c5222bd-0x8f5d-0x4087-0x9c00-0xbf9843c7b58c 00:07:28.832 06:41:21 blockdev_nvme_gpt -- scripts/common.sh@417 -- # spdk_guid=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:07:28.832 06:41:21 blockdev_nvme_gpt -- scripts/common.sh@419 -- # echo 7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:07:28.832 06:41:21 blockdev_nvme_gpt -- bdev/blockdev.sh@129 -- # SPDK_GPT_OLD_GUID=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:07:28.832 06:41:21 blockdev_nvme_gpt -- bdev/blockdev.sh@130 -- # get_spdk_gpt 00:07:28.832 06:41:21 blockdev_nvme_gpt -- scripts/common.sh@423 -- # local spdk_guid 00:07:28.832 06:41:21 blockdev_nvme_gpt -- scripts/common.sh@425 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:07:28.832 06:41:21 blockdev_nvme_gpt -- scripts/common.sh@427 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:07:28.832 06:41:21 blockdev_nvme_gpt -- scripts/common.sh@428 -- # IFS='()' 00:07:28.832 06:41:21 blockdev_nvme_gpt -- scripts/common.sh@428 -- # read -r _ spdk_guid _ 00:07:28.832 06:41:21 blockdev_nvme_gpt -- scripts/common.sh@428 -- # grep -w SPDK_GPT_PART_TYPE_GUID /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:07:28.832 06:41:21 blockdev_nvme_gpt -- scripts/common.sh@429 -- # 
spdk_guid=0x6527994e-0x2c5a-0x4eec-0x9613-0x8f5944074e8b 00:07:28.832 06:41:21 blockdev_nvme_gpt -- scripts/common.sh@429 -- # spdk_guid=6527994e-2c5a-4eec-9613-8f5944074e8b 00:07:28.832 06:41:21 blockdev_nvme_gpt -- scripts/common.sh@431 -- # echo 6527994e-2c5a-4eec-9613-8f5944074e8b 00:07:28.832 06:41:21 blockdev_nvme_gpt -- bdev/blockdev.sh@130 -- # SPDK_GPT_GUID=6527994e-2c5a-4eec-9613-8f5944074e8b 00:07:28.832 06:41:21 blockdev_nvme_gpt -- bdev/blockdev.sh@131 -- # sgdisk -t 1:6527994e-2c5a-4eec-9613-8f5944074e8b -u 1:6f89f330-603b-4116-ac73-2ca8eae53030 /dev/nvme0n1 00:07:29.774 The operation has completed successfully. 00:07:29.774 06:41:22 blockdev_nvme_gpt -- bdev/blockdev.sh@132 -- # sgdisk -t 2:7c5222bd-8f5d-4087-9c00-bf9843c7b58c -u 2:abf1734f-66e5-4c0f-aa29-4021d4d307df /dev/nvme0n1 00:07:30.718 The operation has completed successfully. 00:07:30.718 06:41:23 blockdev_nvme_gpt -- bdev/blockdev.sh@133 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:07:31.290 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:07:31.550 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:07:31.812 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:07:31.812 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:07:31.812 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:07:31.812 06:41:24 blockdev_nvme_gpt -- bdev/blockdev.sh@134 -- # rpc_cmd bdev_get_bdevs 00:07:31.812 06:41:24 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:31.812 06:41:24 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:31.812 [] 00:07:31.812 06:41:24 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:31.812 06:41:24 blockdev_nvme_gpt -- bdev/blockdev.sh@135 -- # setup_nvme_conf 00:07:31.812 06:41:24 blockdev_nvme_gpt -- bdev/blockdev.sh@81 -- # local json 00:07:31.812 06:41:24 blockdev_nvme_gpt -- bdev/blockdev.sh@82 -- # mapfile -t json 00:07:31.812 06:41:24 blockdev_nvme_gpt -- bdev/blockdev.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:07:31.812 06:41:24 blockdev_nvme_gpt -- bdev/blockdev.sh@83 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:10.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme1", "traddr":"0000:00:11.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme2", "traddr":"0000:00:12.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme3", "traddr":"0000:00:13.0" } } ] }'\''' 00:07:31.812 06:41:24 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:31.812 06:41:24 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:32.073 06:41:25 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:32.073 06:41:25 blockdev_nvme_gpt -- bdev/blockdev.sh@736 -- # rpc_cmd bdev_wait_for_examine 00:07:32.073 06:41:25 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:32.073 06:41:25 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:32.073 06:41:25 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:32.073 06:41:25 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # cat 00:07:32.073 06:41:25 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n accel 00:07:32.073 06:41:25 
blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:32.073 06:41:25 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:32.335 06:41:25 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:32.335 06:41:25 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n bdev 00:07:32.335 06:41:25 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:32.335 06:41:25 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:32.335 06:41:25 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:32.335 06:41:25 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n iobuf 00:07:32.335 06:41:25 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:32.335 06:41:25 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:32.335 06:41:25 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:32.335 06:41:25 blockdev_nvme_gpt -- bdev/blockdev.sh@747 -- # mapfile -t bdevs 00:07:32.335 06:41:25 blockdev_nvme_gpt -- bdev/blockdev.sh@747 -- # jq -r '.[] | select(.claimed == false)' 00:07:32.335 06:41:25 blockdev_nvme_gpt -- bdev/blockdev.sh@747 -- # rpc_cmd bdev_get_bdevs 00:07:32.335 06:41:25 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:32.335 06:41:25 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:32.335 06:41:25 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:32.335 06:41:25 blockdev_nvme_gpt -- bdev/blockdev.sh@748 -- # mapfile -t bdevs_name 00:07:32.335 06:41:25 blockdev_nvme_gpt -- bdev/blockdev.sh@748 -- # jq -r .name 00:07:32.336 06:41:25 blockdev_nvme_gpt -- bdev/blockdev.sh@748 -- # printf '%s\n' '{' ' "name": "Nvme0n1",' ' "aliases": [' ' "e0f02201-93bb-43d5-8c10-fcab70f93d77"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "e0f02201-93bb-43d5-8c10-fcab70f93d77",' ' "numa_id": -1,' ' "md_size": 64,' ' "md_interleave": false,' ' "dif_type": 0,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": true,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:10.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:10.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12340",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12340",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme1n1p1",' ' "aliases": [' ' "6f89f330-603b-4116-ac73-2ca8eae53030"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' 
"num_blocks": 655104,' ' "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme1n1",' ' "offset_blocks": 256,' ' "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b",' ' "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "partition_name": "SPDK_TEST_first"' ' }' ' }' '}' '{' ' "name": "Nvme1n1p2",' ' "aliases": [' ' "abf1734f-66e5-4c0f-aa29-4021d4d307df"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' "num_blocks": 655103,' ' "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme1n1",' ' "offset_blocks": 655360,' ' "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c",' ' "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "partition_name": "SPDK_TEST_second"' ' }' ' }' '}' '{' ' "name": "Nvme2n1",' ' "aliases": [' ' "0c158c4f-af9c-4db2-af97-f963721a3135"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "0c158c4f-af9c-4db2-af97-f963721a3135",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' 
"nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n2",' ' "aliases": [' ' "6a955f0f-e2f9-4703-8a2c-091e5938c7ca"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "6a955f0f-e2f9-4703-8a2c-091e5938c7ca",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 2,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n3",' ' "aliases": [' ' "7f012215-1ae8-4e3d-be9c-9cc566fa27a1"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "7f012215-1ae8-4e3d-be9c-9cc566fa27a1",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 3,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme3n1",' ' "aliases": [' ' "478224cf-8070-4b1d-b238-917916fe4af8"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "478224cf-8070-4b1d-b238-917916fe4af8",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' 
"w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:13.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:13.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12343",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:fdp-subsys3",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": true,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": true' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:07:32.336 06:41:25 blockdev_nvme_gpt -- bdev/blockdev.sh@749 -- # bdev_list=("${bdevs_name[@]}") 00:07:32.336 06:41:25 blockdev_nvme_gpt -- bdev/blockdev.sh@751 -- # hello_world_bdev=Nvme0n1 00:07:32.336 06:41:25 blockdev_nvme_gpt -- bdev/blockdev.sh@752 -- # trap - SIGINT SIGTERM EXIT 00:07:32.336 06:41:25 blockdev_nvme_gpt -- bdev/blockdev.sh@753 -- # killprocess 72623 00:07:32.336 06:41:25 blockdev_nvme_gpt -- common/autotest_common.sh@954 -- # '[' -z 72623 ']' 00:07:32.336 06:41:25 blockdev_nvme_gpt -- common/autotest_common.sh@958 -- # kill -0 72623 00:07:32.336 06:41:25 blockdev_nvme_gpt -- common/autotest_common.sh@959 -- # uname 00:07:32.336 06:41:25 blockdev_nvme_gpt -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:32.336 06:41:25 blockdev_nvme_gpt -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72623 00:07:32.336 killing process with pid 72623 00:07:32.336 06:41:25 blockdev_nvme_gpt -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:32.336 06:41:25 blockdev_nvme_gpt -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:32.336 06:41:25 blockdev_nvme_gpt -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72623' 00:07:32.336 06:41:25 blockdev_nvme_gpt -- common/autotest_common.sh@973 -- # kill 72623 00:07:32.336 06:41:25 blockdev_nvme_gpt -- common/autotest_common.sh@978 -- # wait 72623 00:07:32.597 06:41:25 blockdev_nvme_gpt -- bdev/blockdev.sh@757 -- # trap cleanup SIGINT SIGTERM EXIT 00:07:32.597 06:41:25 blockdev_nvme_gpt -- bdev/blockdev.sh@759 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:07:32.597 06:41:25 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:07:32.597 06:41:25 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:32.597 06:41:25 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:32.597 ************************************ 00:07:32.597 START TEST bdev_hello_world 00:07:32.597 ************************************ 00:07:32.597 06:41:25 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:07:32.597 
[2024-11-18 06:41:25.630849] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 00:07:32.597 [2024-11-18 06:41:25.630948] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73238 ] 00:07:32.858 [2024-11-18 06:41:25.771409] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:32.858 [2024-11-18 06:41:25.787770] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:33.119 [2024-11-18 06:41:26.144091] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:07:33.120 [2024-11-18 06:41:26.144138] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1 00:07:33.120 [2024-11-18 06:41:26.144158] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:07:33.120 [2024-11-18 06:41:26.146234] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:07:33.120 [2024-11-18 06:41:26.147139] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:07:33.120 [2024-11-18 06:41:26.147171] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:07:33.120 [2024-11-18 06:41:26.147648] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 00:07:33.120 00:07:33.120 [2024-11-18 06:41:26.147676] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:07:33.380 00:07:33.380 real 0m0.710s 00:07:33.380 user 0m0.472s 00:07:33.380 sys 0m0.135s 00:07:33.380 06:41:26 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:33.380 ************************************ 00:07:33.380 END TEST bdev_hello_world 00:07:33.380 ************************************ 00:07:33.380 06:41:26 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:07:33.380 06:41:26 blockdev_nvme_gpt -- bdev/blockdev.sh@760 -- # run_test bdev_bounds bdev_bounds '' 00:07:33.380 06:41:26 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:33.380 06:41:26 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:33.380 06:41:26 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:33.380 ************************************ 00:07:33.380 START TEST bdev_bounds 00:07:33.380 ************************************ 00:07:33.380 06:41:26 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@1129 -- # bdev_bounds '' 00:07:33.380 06:41:26 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=73269 00:07:33.380 06:41:26 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:07:33.380 06:41:26 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 73269' 00:07:33.380 Process bdevio pid: 73269 00:07:33.380 06:41:26 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 73269 00:07:33.380 06:41:26 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@835 -- # '[' -z 73269 ']' 00:07:33.380 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
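bdev_bounds swaps bdevperf for bdevio: the server side starts with -w so it parks on the RPC socket instead of running tests immediately (the -s 0 carries PRE_RESERVED_MEM=0 from the setup above), and tests.py perform_tests then drives every registered suite over that socket. A simplified sketch of the pairing:

    SPDK_REPO=/home/vagrant/spdk_repo/spdk
    # Server side: park bdevio in wait mode on the default RPC socket.
    "$SPDK_REPO/test/bdev/bdevio/bdevio" -w -s 0 \
        --json "$SPDK_REPO/test/bdev/bdev.json" &
    # Client side: trigger all suites over RPC. (The harness first waits for
    # the socket to come up, as the waitforlisten trace above shows.)
    "$SPDK_REPO/test/bdev/bdevio/tests.py" perform_tests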
00:07:33.380 06:41:26 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:33.380 06:41:26 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:33.380 06:41:26 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:33.380 06:41:26 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:33.380 06:41:26 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:07:33.380 06:41:26 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:07:33.380 [2024-11-18 06:41:26.415742] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 00:07:33.380 [2024-11-18 06:41:26.415864] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73269 ] 00:07:33.642 [2024-11-18 06:41:26.567723] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:33.642 [2024-11-18 06:41:26.589672] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:33.642 [2024-11-18 06:41:26.590009] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:33.642 [2024-11-18 06:41:26.590012] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:34.213 06:41:27 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:34.213 06:41:27 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@868 -- # return 0 00:07:34.213 06:41:27 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:07:34.475 I/O targets: 00:07:34.475 Nvme0n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:07:34.475 Nvme1n1p1: 655104 blocks of 4096 bytes (2559 MiB) 00:07:34.475 Nvme1n1p2: 655103 blocks of 4096 bytes (2559 MiB) 00:07:34.475 Nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:07:34.475 Nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:07:34.475 Nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:07:34.475 Nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:07:34.475 00:07:34.475 00:07:34.475 CUnit - A unit testing framework for C - Version 2.1-3 00:07:34.475 http://cunit.sourceforge.net/ 00:07:34.475 00:07:34.475 00:07:34.475 Suite: bdevio tests on: Nvme3n1 00:07:34.475 Test: blockdev write read block ...passed 00:07:34.475 Test: blockdev write zeroes read block ...passed 00:07:34.475 Test: blockdev write zeroes read no split ...passed 00:07:34.475 Test: blockdev write zeroes read split ...passed 00:07:34.475 Test: blockdev write zeroes read split partial ...passed 00:07:34.475 Test: blockdev reset ...[2024-11-18 06:41:27.381509] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:13.0, 0] resetting controller 00:07:34.475 [2024-11-18 06:41:27.386365] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:13.0, 0] Resetting controller successful. 
00:07:34.475 passed 00:07:34.475 Test: blockdev write read 8 blocks ...passed 00:07:34.475 Test: blockdev write read size > 128k ...passed 00:07:34.475 Test: blockdev write read invalid size ...passed 00:07:34.475 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:34.475 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:34.475 Test: blockdev write read max offset ...passed 00:07:34.475 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:34.475 Test: blockdev writev readv 8 blocks ...passed 00:07:34.475 Test: blockdev writev readv 30 x 1block ...passed 00:07:34.475 Test: blockdev writev readv block ...passed 00:07:34.475 Test: blockdev writev readv size > 128k ...passed 00:07:34.475 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:34.475 Test: blockdev comparev and writev ...[2024-11-18 06:41:27.403260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2cbe0e000 len:0x1000 00:07:34.475 [2024-11-18 06:41:27.403317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:34.475 passed 00:07:34.475 Test: blockdev nvme passthru rw ...passed 00:07:34.475 Test: blockdev nvme passthru vendor specific ...[2024-11-18 06:41:27.405937] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:07:34.475 [2024-11-18 06:41:27.405994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:07:34.475 passed 00:07:34.475 Test: blockdev nvme admin passthru ...passed 00:07:34.475 Test: blockdev copy ...passed 00:07:34.475 Suite: bdevio tests on: Nvme2n3 00:07:34.475 Test: blockdev write read block ...passed 00:07:34.475 Test: blockdev write zeroes read block ...passed 00:07:34.475 Test: blockdev write zeroes read no split ...passed 00:07:34.475 Test: blockdev write zeroes read split ...passed 00:07:34.475 Test: blockdev write zeroes read split partial ...passed 00:07:34.475 Test: blockdev reset ...[2024-11-18 06:41:27.434399] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:07:34.475 [2024-11-18 06:41:27.438444] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
00:07:34.475 passed 00:07:34.475 Test: blockdev write read 8 blocks ...passed 00:07:34.475 Test: blockdev write read size > 128k ...passed 00:07:34.475 Test: blockdev write read invalid size ...passed 00:07:34.475 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:34.475 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:34.475 Test: blockdev write read max offset ...passed 00:07:34.475 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:34.475 Test: blockdev writev readv 8 blocks ...passed 00:07:34.475 Test: blockdev writev readv 30 x 1block ...passed 00:07:34.475 Test: blockdev writev readv block ...passed 00:07:34.475 Test: blockdev writev readv size > 128k ...passed 00:07:34.475 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:34.475 Test: blockdev comparev and writev ...[2024-11-18 06:41:27.455199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:3 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2cbe0a000 len:0x1000 00:07:34.475 [2024-11-18 06:41:27.455263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:34.475 passed 00:07:34.475 Test: blockdev nvme passthru rw ...passed 00:07:34.475 Test: blockdev nvme passthru vendor specific ...[2024-11-18 06:41:27.457732] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:07:34.475 [2024-11-18 06:41:27.457775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:07:34.475 passed 00:07:34.475 Test: blockdev nvme admin passthru ...passed 00:07:34.475 Test: blockdev copy ...passed 00:07:34.475 Suite: bdevio tests on: Nvme2n2 00:07:34.475 Test: blockdev write read block ...passed 00:07:34.475 Test: blockdev write zeroes read block ...passed 00:07:34.475 Test: blockdev write zeroes read no split ...passed 00:07:34.475 Test: blockdev write zeroes read split ...passed 00:07:34.475 Test: blockdev write zeroes read split partial ...passed 00:07:34.475 Test: blockdev reset ...[2024-11-18 06:41:27.487271] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:07:34.475 [2024-11-18 06:41:27.491255] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
00:07:34.475 passed 00:07:34.475 Test: blockdev write read 8 blocks ...passed 00:07:34.476 Test: blockdev write read size > 128k ...passed 00:07:34.476 Test: blockdev write read invalid size ...passed 00:07:34.476 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:34.476 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:34.476 Test: blockdev write read max offset ...passed 00:07:34.476 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:34.476 Test: blockdev writev readv 8 blocks ...passed 00:07:34.476 Test: blockdev writev readv 30 x 1block ...passed 00:07:34.476 Test: blockdev writev readv block ...passed 00:07:34.476 Test: blockdev writev readv size > 128k ...passed 00:07:34.476 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:34.476 Test: blockdev comparev and writev ...[2024-11-18 06:41:27.508973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:2 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2b5a05000 len:0x1000 00:07:34.476 [2024-11-18 06:41:27.509050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:34.476 passed 00:07:34.476 Test: blockdev nvme passthru rw ...passed 00:07:34.476 Test: blockdev nvme passthru vendor specific ...[2024-11-18 06:41:27.512086] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:07:34.476 [2024-11-18 06:41:27.512141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:07:34.476 passed 00:07:34.476 Test: blockdev nvme admin passthru ...passed 00:07:34.476 Test: blockdev copy ...passed 00:07:34.476 Suite: bdevio tests on: Nvme2n1 00:07:34.476 Test: blockdev write read block ...passed 00:07:34.476 Test: blockdev write zeroes read block ...passed 00:07:34.476 Test: blockdev write zeroes read no split ...passed 00:07:34.476 Test: blockdev write zeroes read split ...passed 00:07:34.476 Test: blockdev write zeroes read split partial ...passed 00:07:34.476 Test: blockdev reset ...[2024-11-18 06:41:27.539598] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:07:34.476 [2024-11-18 06:41:27.543103] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
00:07:34.476 passed 00:07:34.476 Test: blockdev write read 8 blocks ...passed 00:07:34.476 Test: blockdev write read size > 128k ...passed 00:07:34.476 Test: blockdev write read invalid size ...passed 00:07:34.476 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:34.476 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:34.476 Test: blockdev write read max offset ...passed 00:07:34.476 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:34.476 Test: blockdev writev readv 8 blocks ...passed 00:07:34.476 Test: blockdev writev readv 30 x 1block ...passed 00:07:34.476 Test: blockdev writev readv block ...passed 00:07:34.476 Test: blockdev writev readv size > 128k ...passed 00:07:34.476 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:34.476 Test: blockdev comparev and writev ...[2024-11-18 06:41:27.559593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2cc202000 len:0x1000 00:07:34.476 [2024-11-18 06:41:27.559672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:34.737 passed 00:07:34.737 Test: blockdev nvme passthru rw ...passed 00:07:34.737 Test: blockdev nvme passthru vendor specific ...[2024-11-18 06:41:27.562229] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:07:34.737 [2024-11-18 06:41:27.562271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:07:34.737 passed 00:07:34.737 Test: blockdev nvme admin passthru ...passed 00:07:34.737 Test: blockdev copy ...passed 00:07:34.737 Suite: bdevio tests on: Nvme1n1p2 00:07:34.737 Test: blockdev write read block ...passed 00:07:34.737 Test: blockdev write zeroes read block ...passed 00:07:34.737 Test: blockdev write zeroes read no split ...passed 00:07:34.737 Test: blockdev write zeroes read split ...passed 00:07:34.737 Test: blockdev write zeroes read split partial ...passed 00:07:34.737 Test: blockdev reset ...[2024-11-18 06:41:27.592640] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0, 0] resetting controller 00:07:34.737 [2024-11-18 06:41:27.595756] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:11.0, 0] Resetting controller successful. 
00:07:34.737 passed 00:07:34.737 Test: blockdev write read 8 blocks ...passed 00:07:34.737 Test: blockdev write read size > 128k ...passed 00:07:34.737 Test: blockdev write read invalid size ...passed 00:07:34.737 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:34.737 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:34.737 Test: blockdev write read max offset ...passed 00:07:34.737 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:34.737 Test: blockdev writev readv 8 blocks ...passed 00:07:34.737 Test: blockdev writev readv 30 x 1block ...passed 00:07:34.737 Test: blockdev writev readv block ...passed 00:07:34.737 Test: blockdev writev readv size > 128k ...passed 00:07:34.737 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:34.737 Test: blockdev comparev and writev ...[2024-11-18 06:41:27.614923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:655360 len:1 SGL DATA BLOCK ADDRESS 0x2e7a3b000 len:0x1000 00:07:34.737 [2024-11-18 06:41:27.615077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:34.737 passed 00:07:34.737 Test: blockdev nvme passthru rw ...passed 00:07:34.737 Test: blockdev nvme passthru vendor specific ...passed 00:07:34.737 Test: blockdev nvme admin passthru ...passed 00:07:34.737 Test: blockdev copy ...passed 00:07:34.737 Suite: bdevio tests on: Nvme1n1p1 00:07:34.737 Test: blockdev write read block ...passed 00:07:34.737 Test: blockdev write zeroes read block ...passed 00:07:34.737 Test: blockdev write zeroes read no split ...passed 00:07:34.737 Test: blockdev write zeroes read split ...passed 00:07:34.737 Test: blockdev write zeroes read split partial ...passed 00:07:34.737 Test: blockdev reset ...[2024-11-18 06:41:27.639572] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0, 0] resetting controller 00:07:34.737 [2024-11-18 06:41:27.642344] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:11.0, 0] Resetting controller successful. 
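Editor's note: every "blockdev reset" case above walks the same path: nvme_ctrlr_disconnect detaches the controller, then bdev_nvme_reset_ctrlr_complete logs "Resetting controller successful." once I/O can resume. Outside bdevio, that path can be driven by hand through SPDK's RPC layer; a hedged sketch against a running target (the controller name and socket path are assumptions, not values from this run):

  # Ask the bdev_nvme layer to reset one attached controller, then watch the
  # target's log for the "Resetting controller successful." notice seen above.
  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $RPC -s /var/tmp/spdk.sock bdev_nvme_reset_controller Nvme0   # "Nvme0" is an assumed controller name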
00:07:34.737 passed 00:07:34.737 Test: blockdev write read 8 blocks ...passed 00:07:34.737 Test: blockdev write read size > 128k ...passed 00:07:34.737 Test: blockdev write read invalid size ...passed 00:07:34.737 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:34.737 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:34.737 Test: blockdev write read max offset ...passed 00:07:34.737 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:34.737 Test: blockdev writev readv 8 blocks ...passed 00:07:34.737 Test: blockdev writev readv 30 x 1block ...passed 00:07:34.737 Test: blockdev writev readv block ...passed 00:07:34.737 Test: blockdev writev readv size > 128k ...passed 00:07:34.737 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:34.737 Test: blockdev comparev and writev ...[2024-11-18 06:41:27.659681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:256 len:1 SGL DATA BLOCK ADDRESS 0x2e7a37000 len:0x1000 00:07:34.737 [2024-11-18 06:41:27.659739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:34.737 passed 00:07:34.737 Test: blockdev nvme passthru rw ...passed 00:07:34.737 Test: blockdev nvme passthru vendor specific ...passed 00:07:34.737 Test: blockdev nvme admin passthru ...passed 00:07:34.737 Test: blockdev copy ...passed 00:07:34.737 Suite: bdevio tests on: Nvme0n1 00:07:34.737 Test: blockdev write read block ...passed 00:07:34.737 Test: blockdev write zeroes read block ...passed 00:07:34.737 Test: blockdev write zeroes read no split ...passed 00:07:34.737 Test: blockdev write zeroes read split ...passed 00:07:34.737 Test: blockdev write zeroes read split partial ...passed 00:07:34.737 Test: blockdev reset ...[2024-11-18 06:41:27.683745] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0, 0] resetting controller 00:07:34.737 [2024-11-18 06:41:27.686617] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:10.0, 0] Resetting controller successful. 00:07:34.737 passed 00:07:34.737 Test: blockdev write read 8 blocks ...passed 00:07:34.737 Test: blockdev write read size > 128k ...passed 00:07:34.737 Test: blockdev write read invalid size ...passed 00:07:34.737 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:34.738 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:34.738 Test: blockdev write read max offset ...passed 00:07:34.738 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:34.738 Test: blockdev writev readv 8 blocks ...passed 00:07:34.738 Test: blockdev writev readv 30 x 1block ...passed 00:07:34.738 Test: blockdev writev readv block ...passed 00:07:34.738 Test: blockdev writev readv size > 128k ...passed 00:07:34.738 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:34.738 Test: blockdev comparev and writev ...[2024-11-18 06:41:27.701297] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1 since it has 00:07:34.738 separate metadata which is not supported yet. 
00:07:34.738 passed 00:07:34.738 Test: blockdev nvme passthru rw ...passed 00:07:34.738 Test: blockdev nvme passthru vendor specific ...passed 00:07:34.738 Test: blockdev nvme admin passthru ...[2024-11-18 06:41:27.703718] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:191 PRP1 0x0 PRP2 0x0 00:07:34.738 [2024-11-18 06:41:27.703853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:191 cdw0:0 sqhd:0017 p:1 m:0 dnr:1 00:07:34.738 passed 00:07:34.738 Test: blockdev copy ...passed 00:07:34.738 00:07:34.738 Run Summary: Type Total Ran Passed Failed Inactive 00:07:34.738 suites 7 7 n/a 0 0 00:07:34.738 tests 161 161 161 0 0 00:07:34.738 asserts 1025 1025 1025 0 n/a 00:07:34.738 00:07:34.738 Elapsed time = 0.775 seconds 00:07:34.738 0 00:07:34.738 06:41:27 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 73269 00:07:34.738 06:41:27 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@954 -- # '[' -z 73269 ']' 00:07:34.738 06:41:27 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@958 -- # kill -0 73269 00:07:34.738 06:41:27 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@959 -- # uname 00:07:34.738 06:41:27 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:34.738 06:41:27 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73269 00:07:34.738 06:41:27 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:34.738 06:41:27 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:34.738 06:41:27 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73269' 00:07:34.738 killing process with pid 73269 00:07:34.738 06:41:27 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@973 -- # kill 73269 00:07:34.738 06:41:27 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@978 -- # wait 73269 00:07:34.999 06:41:27 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:07:34.999 00:07:34.999 real 0m1.541s 00:07:34.999 user 0m3.867s 00:07:34.999 sys 0m0.290s 00:07:34.999 06:41:27 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:34.999 06:41:27 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:07:34.999 ************************************ 00:07:34.999 END TEST bdev_bounds 00:07:34.999 ************************************ 00:07:34.999 06:41:27 blockdev_nvme_gpt -- bdev/blockdev.sh@761 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:07:34.999 06:41:27 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:07:34.999 06:41:27 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:34.999 06:41:27 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:34.999 ************************************ 00:07:34.999 START TEST bdev_nbd 00:07:34.999 ************************************ 00:07:34.999 06:41:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@1129 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:07:34.999 06:41:27 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:07:34.999 06:41:27 
blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:07:34.999 06:41:27 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:34.999 06:41:27 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:07:34.999 06:41:27 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:34.999 06:41:27 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:07:34.999 06:41:27 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=7 00:07:34.999 06:41:27 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:07:34.999 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:07:34.999 06:41:27 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:07:34.999 06:41:27 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:07:34.999 06:41:27 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=7 00:07:34.999 06:41:27 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:07:34.999 06:41:27 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:07:34.999 06:41:27 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:34.999 06:41:27 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:07:34.999 06:41:27 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=73312 00:07:34.999 06:41:27 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:07:34.999 06:41:27 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 73312 /var/tmp/spdk-nbd.sock 00:07:34.999 06:41:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@835 -- # '[' -z 73312 ']' 00:07:34.999 06:41:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:34.999 06:41:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:34.999 06:41:27 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:07:34.999 06:41:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:07:34.999 06:41:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:34.999 06:41:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:07:34.999 [2024-11-18 06:41:28.034901] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 
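Editor's note: the killprocess and waitforlisten helpers traced above are two small shell idioms worth noting. kill -0 probes whether a PID is still alive without delivering a signal, wait reaps the process after SIGTERM so its exit status is collected, and waitforlisten simply polls until the daemon's UNIX-domain RPC socket exists. A minimal standalone re-creation of both, under an assumed 10-second budget (the autotest versions add more retries and logging):

  #!/usr/bin/env bash
  waitforlisten() {                      # poll until the RPC socket appears
      local sock=$1 i
      for i in $(seq 1 100); do          # ~10 s budget, assumed
          [ -S "$sock" ] && return 0
          sleep 0.1
      done
      return 1
  }
  killprocess() {                        # TERM a PID only if it is still alive
      local pid=$1
      kill -0 "$pid" 2>/dev/null || return 0   # -0 probes without signalling
      kill "$pid"
      wait "$pid" 2>/dev/null                  # reap; ignore "not a child" noise
  }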
00:07:34.999 [2024-11-18 06:41:28.035175] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:35.261 [2024-11-18 06:41:28.189971] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:35.261 [2024-11-18 06:41:28.209800] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:35.833 06:41:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:35.833 06:41:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # return 0 00:07:35.833 06:41:28 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:07:35.833 06:41:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:35.833 06:41:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:35.833 06:41:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:07:35.833 06:41:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:07:35.834 06:41:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:35.834 06:41:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:35.834 06:41:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:07:35.834 06:41:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:07:35.834 06:41:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:07:35.834 06:41:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:07:35.834 06:41:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:07:35.834 06:41:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 00:07:36.094 06:41:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:07:36.094 06:41:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:07:36.094 06:41:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:07:36.094 06:41:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:07:36.094 06:41:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:36.094 06:41:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:36.094 06:41:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:36.094 06:41:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:07:36.094 06:41:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:36.094 06:41:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:36.094 06:41:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:36.094 06:41:29 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:36.094 1+0 records in 00:07:36.094 1+0 records out 00:07:36.094 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00128147 s, 3.2 MB/s 00:07:36.094 06:41:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:36.094 06:41:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:36.094 06:41:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:36.094 06:41:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:36.094 06:41:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:36.094 06:41:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:36.094 06:41:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:07:36.094 06:41:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p1 00:07:36.354 06:41:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:07:36.354 06:41:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:07:36.354 06:41:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:07:36.354 06:41:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:07:36.354 06:41:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:36.354 06:41:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:36.354 06:41:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:36.354 06:41:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:07:36.355 06:41:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:36.355 06:41:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:36.355 06:41:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:36.355 06:41:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:36.355 1+0 records in 00:07:36.355 1+0 records out 00:07:36.355 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000893994 s, 4.6 MB/s 00:07:36.355 06:41:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:36.355 06:41:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:36.355 06:41:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:36.355 06:41:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:36.355 06:41:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:36.355 06:41:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:36.355 06:41:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:07:36.355 06:41:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk 
Nvme1n1p2 00:07:36.615 06:41:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:07:36.615 06:41:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:07:36.615 06:41:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:07:36.615 06:41:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd2 00:07:36.615 06:41:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:36.615 06:41:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:36.615 06:41:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:36.615 06:41:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd2 /proc/partitions 00:07:36.615 06:41:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:36.615 06:41:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:36.615 06:41:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:36.615 06:41:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:36.615 1+0 records in 00:07:36.615 1+0 records out 00:07:36.615 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000419676 s, 9.8 MB/s 00:07:36.616 06:41:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:36.616 06:41:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:36.616 06:41:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:36.616 06:41:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:36.616 06:41:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:36.616 06:41:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:36.616 06:41:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:07:36.616 06:41:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 00:07:36.875 06:41:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:07:36.875 06:41:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:07:36.875 06:41:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:07:36.875 06:41:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd3 00:07:36.875 06:41:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:36.875 06:41:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:36.875 06:41:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:36.875 06:41:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd3 /proc/partitions 00:07:36.875 06:41:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:36.875 06:41:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:36.875 06:41:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:36.875 06:41:29 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@889 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:36.875 1+0 records in 00:07:36.875 1+0 records out 00:07:36.875 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00161346 s, 2.5 MB/s 00:07:36.875 06:41:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:36.875 06:41:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:36.875 06:41:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:36.875 06:41:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:36.875 06:41:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:36.875 06:41:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:36.875 06:41:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:07:36.875 06:41:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 00:07:37.135 06:41:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:07:37.135 06:41:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:07:37.135 06:41:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:07:37.135 06:41:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd4 00:07:37.135 06:41:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:37.135 06:41:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:37.135 06:41:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:37.135 06:41:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd4 /proc/partitions 00:07:37.135 06:41:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:37.135 06:41:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:37.135 06:41:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:37.135 06:41:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:37.135 1+0 records in 00:07:37.135 1+0 records out 00:07:37.135 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000476114 s, 8.6 MB/s 00:07:37.135 06:41:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:37.135 06:41:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:37.135 06:41:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:37.135 06:41:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:37.135 06:41:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:37.135 06:41:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:37.135 06:41:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:07:37.135 06:41:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 
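Editor's note: each nbd_start_disk above is followed by the same readiness check. waitfornbd greps /proc/partitions until the new nbdX entry shows up, then performs a single 4 KiB O_DIRECT read through the device and verifies the copied size with stat; the "1+0 records in / out" lines and MB/s figures are that dd completing. A trimmed sketch of the check (the device name and scratch path are placeholders):

  # Wait for /dev/nbdN to register, then prove it services a direct read.
  waitfornbd() {
      local nbd_name=$1 i
      for i in $(seq 1 20); do
          grep -q -w "$nbd_name" /proc/partitions && break
          sleep 0.1
      done
      dd if="/dev/$nbd_name" of=/tmp/nbdtest bs=4096 count=1 iflag=direct || return 1
      [ "$(stat -c %s /tmp/nbdtest)" -eq 4096 ]   # a short read means the device is not ready
  }
  waitfornbd nbd0    # example invocation; assumes /dev/nbd0 was just attached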
00:07:37.395 06:41:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:07:37.395 06:41:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:07:37.395 06:41:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:07:37.395 06:41:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd5 00:07:37.395 06:41:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:37.395 06:41:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:37.395 06:41:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:37.395 06:41:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd5 /proc/partitions 00:07:37.395 06:41:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:37.395 06:41:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:37.395 06:41:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:37.395 06:41:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:37.396 1+0 records in 00:07:37.396 1+0 records out 00:07:37.396 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00108253 s, 3.8 MB/s 00:07:37.396 06:41:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:37.396 06:41:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:37.396 06:41:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:37.396 06:41:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:37.396 06:41:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:37.396 06:41:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:37.396 06:41:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:07:37.396 06:41:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 00:07:37.655 06:41:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd6 00:07:37.655 06:41:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd6 00:07:37.655 06:41:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd6 00:07:37.655 06:41:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd6 00:07:37.655 06:41:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:37.655 06:41:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:37.655 06:41:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:37.655 06:41:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd6 /proc/partitions 00:07:37.655 06:41:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:37.655 06:41:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:37.655 06:41:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:37.655 06:41:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd 
if=/dev/nbd6 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:37.655 1+0 records in 00:07:37.655 1+0 records out 00:07:37.655 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000872762 s, 4.7 MB/s 00:07:37.655 06:41:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:37.655 06:41:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:37.655 06:41:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:37.655 06:41:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:37.655 06:41:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:37.655 06:41:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:37.655 06:41:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:07:37.655 06:41:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:37.916 06:41:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:07:37.916 { 00:07:37.916 "nbd_device": "/dev/nbd0", 00:07:37.916 "bdev_name": "Nvme0n1" 00:07:37.916 }, 00:07:37.916 { 00:07:37.916 "nbd_device": "/dev/nbd1", 00:07:37.916 "bdev_name": "Nvme1n1p1" 00:07:37.916 }, 00:07:37.916 { 00:07:37.916 "nbd_device": "/dev/nbd2", 00:07:37.916 "bdev_name": "Nvme1n1p2" 00:07:37.916 }, 00:07:37.916 { 00:07:37.916 "nbd_device": "/dev/nbd3", 00:07:37.916 "bdev_name": "Nvme2n1" 00:07:37.916 }, 00:07:37.916 { 00:07:37.916 "nbd_device": "/dev/nbd4", 00:07:37.916 "bdev_name": "Nvme2n2" 00:07:37.916 }, 00:07:37.916 { 00:07:37.916 "nbd_device": "/dev/nbd5", 00:07:37.916 "bdev_name": "Nvme2n3" 00:07:37.916 }, 00:07:37.916 { 00:07:37.916 "nbd_device": "/dev/nbd6", 00:07:37.916 "bdev_name": "Nvme3n1" 00:07:37.916 } 00:07:37.916 ]' 00:07:37.916 06:41:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:07:37.916 06:41:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:07:37.916 06:41:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:07:37.916 { 00:07:37.916 "nbd_device": "/dev/nbd0", 00:07:37.916 "bdev_name": "Nvme0n1" 00:07:37.916 }, 00:07:37.916 { 00:07:37.916 "nbd_device": "/dev/nbd1", 00:07:37.916 "bdev_name": "Nvme1n1p1" 00:07:37.916 }, 00:07:37.916 { 00:07:37.916 "nbd_device": "/dev/nbd2", 00:07:37.916 "bdev_name": "Nvme1n1p2" 00:07:37.916 }, 00:07:37.916 { 00:07:37.916 "nbd_device": "/dev/nbd3", 00:07:37.916 "bdev_name": "Nvme2n1" 00:07:37.916 }, 00:07:37.916 { 00:07:37.916 "nbd_device": "/dev/nbd4", 00:07:37.916 "bdev_name": "Nvme2n2" 00:07:37.916 }, 00:07:37.916 { 00:07:37.916 "nbd_device": "/dev/nbd5", 00:07:37.916 "bdev_name": "Nvme2n3" 00:07:37.916 }, 00:07:37.916 { 00:07:37.916 "nbd_device": "/dev/nbd6", 00:07:37.916 "bdev_name": "Nvme3n1" 00:07:37.916 } 00:07:37.916 ]' 00:07:37.916 06:41:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6' 00:07:37.916 06:41:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:37.916 06:41:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' 
'/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6') 00:07:37.916 06:41:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:37.916 06:41:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:07:37.916 06:41:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:37.916 06:41:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:38.177 06:41:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:38.177 06:41:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:38.177 06:41:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:38.177 06:41:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:38.177 06:41:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:38.177 06:41:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:38.177 06:41:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:38.177 06:41:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:38.177 06:41:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:38.177 06:41:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:38.439 06:41:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:38.439 06:41:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:38.439 06:41:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:38.439 06:41:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:38.439 06:41:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:38.439 06:41:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:38.439 06:41:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:38.439 06:41:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:38.439 06:41:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:38.439 06:41:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:07:38.701 06:41:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:07:38.702 06:41:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:07:38.702 06:41:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:07:38.702 06:41:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:38.702 06:41:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:38.702 06:41:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:07:38.702 06:41:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:38.702 06:41:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:38.702 06:41:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:38.702 06:41:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 
-- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:07:38.702 06:41:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:07:38.964 06:41:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:07:38.964 06:41:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:07:38.964 06:41:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:38.964 06:41:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:38.964 06:41:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:07:38.964 06:41:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:38.964 06:41:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:38.964 06:41:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:38.964 06:41:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:07:38.964 06:41:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:07:38.964 06:41:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:07:38.964 06:41:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:07:38.964 06:41:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:38.964 06:41:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:38.964 06:41:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:07:38.964 06:41:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:38.964 06:41:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:38.964 06:41:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:38.964 06:41:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:07:39.224 06:41:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:07:39.224 06:41:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:07:39.224 06:41:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:07:39.224 06:41:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:39.224 06:41:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:39.224 06:41:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:07:39.224 06:41:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:39.224 06:41:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:39.224 06:41:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:39.224 06:41:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd6 00:07:39.486 06:41:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd6 00:07:39.486 06:41:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd6 00:07:39.486 06:41:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd6 00:07:39.486 06:41:32 
blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:39.486 06:41:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:39.486 06:41:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd6 /proc/partitions 00:07:39.486 06:41:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:39.486 06:41:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:39.486 06:41:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:39.486 06:41:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:39.486 06:41:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:39.747 06:41:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:39.747 06:41:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:39.747 06:41:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:39.747 06:41:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:39.747 06:41:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:07:39.747 06:41:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:39.747 06:41:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:07:39.747 06:41:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:07:39.747 06:41:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:07:39.747 06:41:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:07:39.747 06:41:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:07:39.747 06:41:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:07:39.747 06:41:32 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:07:39.747 06:41:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:39.747 06:41:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:39.747 06:41:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:07:39.747 06:41:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:07:39.747 06:41:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:07:39.747 06:41:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:07:39.747 06:41:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:39.747 06:41:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:39.747 06:41:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:39.747 06:41:32 blockdev_nvme_gpt.bdev_nbd -- 
bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:07:39.747 06:41:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:07:39.747 06:41:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:07:39.747 06:41:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:39.747 06:41:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:39.747 06:41:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0 00:07:40.008 /dev/nbd0 00:07:40.008 06:41:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:40.008 06:41:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:07:40.008 06:41:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:07:40.008 06:41:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:40.008 06:41:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:40.008 06:41:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:40.008 06:41:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:07:40.008 06:41:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:40.008 06:41:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:40.008 06:41:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:40.008 06:41:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:40.008 1+0 records in 00:07:40.008 1+0 records out 00:07:40.008 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00103168 s, 4.0 MB/s 00:07:40.008 06:41:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:40.008 06:41:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:40.008 06:41:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:40.008 06:41:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:40.008 06:41:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:40.008 06:41:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:40.008 06:41:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:40.008 06:41:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p1 /dev/nbd1 00:07:40.270 /dev/nbd1 00:07:40.270 06:41:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:07:40.270 06:41:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:07:40.270 06:41:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:07:40.270 06:41:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:40.270 06:41:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:40.270 06:41:33 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:40.270 06:41:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:07:40.270 06:41:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:40.270 06:41:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:40.270 06:41:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:40.270 06:41:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:40.270 1+0 records in 00:07:40.270 1+0 records out 00:07:40.270 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000990699 s, 4.1 MB/s 00:07:40.270 06:41:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:40.270 06:41:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:40.270 06:41:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:40.270 06:41:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:40.270 06:41:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:40.270 06:41:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:40.270 06:41:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:40.270 06:41:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p2 /dev/nbd10 00:07:40.531 /dev/nbd10 00:07:40.531 06:41:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:07:40.531 06:41:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:07:40.531 06:41:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd10 00:07:40.531 06:41:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:40.531 06:41:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:40.532 06:41:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:40.532 06:41:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd10 /proc/partitions 00:07:40.532 06:41:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:40.532 06:41:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:40.532 06:41:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:40.532 06:41:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:40.532 1+0 records in 00:07:40.532 1+0 records out 00:07:40.532 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00115187 s, 3.6 MB/s 00:07:40.532 06:41:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:40.532 06:41:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:40.532 06:41:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:40.532 06:41:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 
'!=' 0 ']' 00:07:40.532 06:41:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:40.532 06:41:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:40.532 06:41:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:40.532 06:41:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 /dev/nbd11 00:07:40.852 /dev/nbd11 00:07:40.852 06:41:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:07:40.852 06:41:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:07:40.852 06:41:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd11 00:07:40.852 06:41:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:40.852 06:41:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:40.852 06:41:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:40.852 06:41:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd11 /proc/partitions 00:07:40.852 06:41:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:40.852 06:41:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:40.852 06:41:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:40.852 06:41:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:40.852 1+0 records in 00:07:40.852 1+0 records out 00:07:40.852 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00105438 s, 3.9 MB/s 00:07:40.852 06:41:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:40.852 06:41:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:40.852 06:41:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:40.852 06:41:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:40.852 06:41:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:40.852 06:41:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:40.852 06:41:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:40.852 06:41:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 /dev/nbd12 00:07:40.852 /dev/nbd12 00:07:41.116 06:41:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:07:41.116 06:41:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:07:41.116 06:41:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd12 00:07:41.116 06:41:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:41.116 06:41:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:41.116 06:41:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:41.116 06:41:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd12 /proc/partitions 00:07:41.116 06:41:33 
blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:41.116 06:41:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:41.116 06:41:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:41.116 06:41:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:41.116 1+0 records in 00:07:41.116 1+0 records out 00:07:41.116 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00115267 s, 3.6 MB/s 00:07:41.116 06:41:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:41.116 06:41:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:41.116 06:41:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:41.116 06:41:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:41.116 06:41:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:41.116 06:41:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:41.116 06:41:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:41.116 06:41:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 /dev/nbd13 00:07:41.116 /dev/nbd13 00:07:41.116 06:41:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:07:41.116 06:41:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:07:41.116 06:41:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd13 00:07:41.116 06:41:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:41.116 06:41:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:41.116 06:41:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:41.116 06:41:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd13 /proc/partitions 00:07:41.116 06:41:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:41.116 06:41:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:41.116 06:41:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:41.116 06:41:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:41.116 1+0 records in 00:07:41.116 1+0 records out 00:07:41.116 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00085533 s, 4.8 MB/s 00:07:41.116 06:41:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:41.116 06:41:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:41.116 06:41:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:41.116 06:41:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:41.116 06:41:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:41.116 06:41:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 
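[Annotation] The per-device blocks in this trace all come from the same helper: waitfornbd polls /proc/partitions until the kernel exposes the node, then proves the device answers I/O with a single 4 KiB O_DIRECT read. A minimal sketch of that logic, reconstructed from the autotest_common.sh xtrace above (the sleep interval and the scratch-file path are assumptions, not copied from the real helper):

    waitfornbd() {
        local nbd_name=$1
        local i
        # Wait for the kernel to list the device (up to 20 polls)
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$nbd_name" /proc/partitions && break
            sleep 0.1    # assumed back-off between polls
        done
        # Retry a direct-I/O read of one 4 KiB block until it succeeds
        for ((i = 1; i <= 20; i++)); do
            dd if="/dev/$nbd_name" of=/tmp/nbdtest bs=4096 count=1 iflag=direct && break
        done
        # A non-empty scratch file means the device really served the read
        local size
        size=$(stat -c %s /tmp/nbdtest)
        rm -f /tmp/nbdtest
        [ "$size" != 0 ]
    }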
00:07:41.116 06:41:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:41.116 06:41:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 /dev/nbd14 00:07:41.377 /dev/nbd14 00:07:41.377 06:41:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd14 00:07:41.377 06:41:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd14 00:07:41.377 06:41:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd14 00:07:41.377 06:41:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:41.377 06:41:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:41.377 06:41:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:41.377 06:41:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd14 /proc/partitions 00:07:41.377 06:41:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:41.377 06:41:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:41.378 06:41:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:41.378 06:41:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd14 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:41.378 1+0 records in 00:07:41.378 1+0 records out 00:07:41.378 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00118279 s, 3.5 MB/s 00:07:41.378 06:41:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:41.378 06:41:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:41.378 06:41:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:41.378 06:41:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:41.378 06:41:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:41.378 06:41:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:41.378 06:41:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:41.378 06:41:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:41.378 06:41:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:41.378 06:41:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:41.640 06:41:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:07:41.640 { 00:07:41.640 "nbd_device": "/dev/nbd0", 00:07:41.640 "bdev_name": "Nvme0n1" 00:07:41.640 }, 00:07:41.640 { 00:07:41.640 "nbd_device": "/dev/nbd1", 00:07:41.640 "bdev_name": "Nvme1n1p1" 00:07:41.640 }, 00:07:41.640 { 00:07:41.640 "nbd_device": "/dev/nbd10", 00:07:41.640 "bdev_name": "Nvme1n1p2" 00:07:41.640 }, 00:07:41.640 { 00:07:41.640 "nbd_device": "/dev/nbd11", 00:07:41.640 "bdev_name": "Nvme2n1" 00:07:41.640 }, 00:07:41.640 { 00:07:41.640 "nbd_device": "/dev/nbd12", 00:07:41.640 "bdev_name": "Nvme2n2" 00:07:41.640 }, 00:07:41.640 { 00:07:41.640 "nbd_device": "/dev/nbd13", 00:07:41.640 "bdev_name": "Nvme2n3" 00:07:41.640 }, 00:07:41.640 { 
00:07:41.640 "nbd_device": "/dev/nbd14", 00:07:41.640 "bdev_name": "Nvme3n1" 00:07:41.640 } 00:07:41.640 ]' 00:07:41.640 06:41:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:41.640 { 00:07:41.640 "nbd_device": "/dev/nbd0", 00:07:41.640 "bdev_name": "Nvme0n1" 00:07:41.640 }, 00:07:41.640 { 00:07:41.640 "nbd_device": "/dev/nbd1", 00:07:41.640 "bdev_name": "Nvme1n1p1" 00:07:41.640 }, 00:07:41.640 { 00:07:41.640 "nbd_device": "/dev/nbd10", 00:07:41.640 "bdev_name": "Nvme1n1p2" 00:07:41.640 }, 00:07:41.640 { 00:07:41.640 "nbd_device": "/dev/nbd11", 00:07:41.640 "bdev_name": "Nvme2n1" 00:07:41.640 }, 00:07:41.640 { 00:07:41.640 "nbd_device": "/dev/nbd12", 00:07:41.640 "bdev_name": "Nvme2n2" 00:07:41.640 }, 00:07:41.640 { 00:07:41.640 "nbd_device": "/dev/nbd13", 00:07:41.640 "bdev_name": "Nvme2n3" 00:07:41.640 }, 00:07:41.640 { 00:07:41.640 "nbd_device": "/dev/nbd14", 00:07:41.640 "bdev_name": "Nvme3n1" 00:07:41.640 } 00:07:41.640 ]' 00:07:41.640 06:41:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:41.640 06:41:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:07:41.640 /dev/nbd1 00:07:41.640 /dev/nbd10 00:07:41.640 /dev/nbd11 00:07:41.640 /dev/nbd12 00:07:41.640 /dev/nbd13 00:07:41.640 /dev/nbd14' 00:07:41.640 06:41:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:41.640 06:41:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:07:41.640 /dev/nbd1 00:07:41.640 /dev/nbd10 00:07:41.640 /dev/nbd11 00:07:41.640 /dev/nbd12 00:07:41.640 /dev/nbd13 00:07:41.640 /dev/nbd14' 00:07:41.640 06:41:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=7 00:07:41.640 06:41:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 7 00:07:41.640 06:41:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=7 00:07:41.640 06:41:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 7 -ne 7 ']' 00:07:41.640 06:41:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' write 00:07:41.640 06:41:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:07:41.640 06:41:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:41.640 06:41:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:07:41.640 06:41:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:07:41.640 06:41:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:07:41.640 06:41:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:07:41.640 256+0 records in 00:07:41.640 256+0 records out 00:07:41.640 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00618221 s, 170 MB/s 00:07:41.640 06:41:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:41.640 06:41:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:07:41.902 256+0 records in 00:07:41.902 256+0 records out 00:07:41.902 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.23828 s, 4.4 MB/s 00:07:41.902 
06:41:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:41.902 06:41:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:07:42.163 256+0 records in 00:07:42.163 256+0 records out 00:07:42.163 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.209179 s, 5.0 MB/s 00:07:42.163 06:41:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:42.163 06:41:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:07:42.425 256+0 records in 00:07:42.425 256+0 records out 00:07:42.425 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.241441 s, 4.3 MB/s 00:07:42.425 06:41:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:42.425 06:41:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:07:42.688 256+0 records in 00:07:42.688 256+0 records out 00:07:42.688 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.249745 s, 4.2 MB/s 00:07:42.688 06:41:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:42.688 06:41:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:07:42.950 256+0 records in 00:07:42.950 256+0 records out 00:07:42.950 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.205085 s, 5.1 MB/s 00:07:42.950 06:41:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:42.950 06:41:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:07:43.211 256+0 records in 00:07:43.211 256+0 records out 00:07:43.211 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.240032 s, 4.4 MB/s 00:07:43.211 06:41:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:43.211 06:41:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd14 bs=4096 count=256 oflag=direct 00:07:43.474 256+0 records in 00:07:43.474 256+0 records out 00:07:43.474 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.240261 s, 4.4 MB/s 00:07:43.474 06:41:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' verify 00:07:43.474 06:41:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:07:43.474 06:41:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:43.474 06:41:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:07:43.474 06:41:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:07:43.474 06:41:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:07:43.474 06:41:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:07:43.474 06:41:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:43.474 06:41:36 
blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:07:43.474 06:41:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:43.474 06:41:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:07:43.474 06:41:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:43.474 06:41:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:07:43.474 06:41:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:43.474 06:41:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:07:43.474 06:41:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:43.474 06:41:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:07:43.474 06:41:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:43.474 06:41:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:07:43.474 06:41:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:43.474 06:41:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd14 00:07:43.474 06:41:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:07:43.474 06:41:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:07:43.474 06:41:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:43.474 06:41:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:07:43.474 06:41:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:43.474 06:41:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:07:43.474 06:41:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:43.474 06:41:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:43.736 06:41:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:43.736 06:41:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:43.736 06:41:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:43.736 06:41:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:43.736 06:41:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:43.736 06:41:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:43.736 06:41:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:43.736 06:41:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:43.736 
06:41:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:43.736 06:41:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:43.998 06:41:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:43.998 06:41:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:43.998 06:41:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:43.998 06:41:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:43.998 06:41:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:43.998 06:41:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:43.998 06:41:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:43.998 06:41:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:43.998 06:41:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:43.998 06:41:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:07:44.260 06:41:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:07:44.260 06:41:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:07:44.260 06:41:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:07:44.260 06:41:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:44.260 06:41:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:44.260 06:41:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:07:44.260 06:41:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:44.260 06:41:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:44.260 06:41:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:44.261 06:41:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:07:44.261 06:41:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:07:44.261 06:41:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:07:44.261 06:41:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:07:44.261 06:41:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:44.261 06:41:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:44.261 06:41:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:07:44.261 06:41:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:44.261 06:41:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:44.261 06:41:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:44.261 06:41:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:07:44.522 06:41:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:07:44.522 06:41:37 
blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:07:44.522 06:41:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:07:44.522 06:41:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:44.522 06:41:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:44.522 06:41:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:07:44.522 06:41:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:44.522 06:41:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:44.522 06:41:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:44.523 06:41:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:07:44.785 06:41:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:07:44.785 06:41:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:07:44.785 06:41:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:07:44.785 06:41:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:44.785 06:41:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:44.785 06:41:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:07:44.785 06:41:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:44.785 06:41:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:44.785 06:41:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:44.785 06:41:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd14 00:07:45.047 06:41:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd14 00:07:45.047 06:41:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd14 00:07:45.047 06:41:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd14 00:07:45.047 06:41:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:45.047 06:41:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:45.047 06:41:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd14 /proc/partitions 00:07:45.047 06:41:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:45.047 06:41:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:45.047 06:41:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:45.047 06:41:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:45.047 06:41:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:45.309 06:41:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:45.309 06:41:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:45.309 06:41:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:45.309 06:41:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:45.309 
06:41:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:07:45.309 06:41:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:45.309 06:41:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:07:45.309 06:41:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:07:45.309 06:41:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:07:45.309 06:41:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:07:45.309 06:41:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:07:45.309 06:41:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:07:45.309 06:41:38 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:07:45.309 06:41:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:45.309 06:41:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:07:45.309 06:41:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:07:45.570 malloc_lvol_verify 00:07:45.570 06:41:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:07:45.831 fdb080f7-7496-4ade-9030-627dba1bec79 00:07:45.831 06:41:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:07:45.831 a7df20a9-ba9e-4c58-bfef-e53e9327bdde 00:07:45.831 06:41:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:07:46.092 /dev/nbd0 00:07:46.092 06:41:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:07:46.092 06:41:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:07:46.092 06:41:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:07:46.092 06:41:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:07:46.092 06:41:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:07:46.092 mke2fs 1.47.0 (5-Feb-2023) 00:07:46.092 Discarding device blocks: 0/4096 done 00:07:46.092 Creating filesystem with 4096 1k blocks and 1024 inodes 00:07:46.092 00:07:46.092 Allocating group tables: 0/1 done 00:07:46.092 Writing inode tables: 0/1 done 00:07:46.092 Creating journal (1024 blocks): done 00:07:46.092 Writing superblocks and filesystem accounting information: 0/1 done 00:07:46.092 00:07:46.092 06:41:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:07:46.092 06:41:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:46.092 06:41:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:07:46.092 06:41:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:46.092 06:41:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:07:46.092 06:41:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:46.092 06:41:39 
blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:46.351 06:41:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:46.351 06:41:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:46.351 06:41:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:46.351 06:41:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:46.351 06:41:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:46.351 06:41:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:46.351 06:41:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:46.351 06:41:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:46.351 06:41:39 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 73312 00:07:46.351 06:41:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@954 -- # '[' -z 73312 ']' 00:07:46.351 06:41:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@958 -- # kill -0 73312 00:07:46.351 06:41:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@959 -- # uname 00:07:46.351 06:41:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:46.351 06:41:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73312 00:07:46.351 killing process with pid 73312 00:07:46.351 06:41:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:46.351 06:41:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:46.351 06:41:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73312' 00:07:46.351 06:41:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@973 -- # kill 73312 00:07:46.351 06:41:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@978 -- # wait 73312 00:07:46.613 ************************************ 00:07:46.613 END TEST bdev_nbd 00:07:46.613 ************************************ 00:07:46.613 06:41:39 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:07:46.613 00:07:46.613 real 0m11.537s 00:07:46.613 user 0m15.962s 00:07:46.613 sys 0m4.020s 00:07:46.613 06:41:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:46.613 06:41:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:07:46.613 06:41:39 blockdev_nvme_gpt -- bdev/blockdev.sh@762 -- # [[ y == y ]] 00:07:46.613 06:41:39 blockdev_nvme_gpt -- bdev/blockdev.sh@763 -- # '[' gpt = nvme ']' 00:07:46.613 skipping fio tests on NVMe due to multi-ns failures. 00:07:46.613 06:41:39 blockdev_nvme_gpt -- bdev/blockdev.sh@763 -- # '[' gpt = gpt ']' 00:07:46.613 06:41:39 blockdev_nvme_gpt -- bdev/blockdev.sh@765 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 
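[Annotation] The teardown above ends with killprocess 73312. Following the autotest_common.sh trace (@954-@978), the helper validates the pid, confirms the process is still alive, inspects the command name so it never signals a sudo wrapper directly, then kills and reaps the target. A condensed sketch; the sudo special-case and any non-Linux path are simplified assumptions (the traced run sees process_name=reactor_0, so that branch is not taken):

    killprocess() {
        local pid=$1
        [ -z "$pid" ] && return 1        # refuse an empty pid
        kill -0 "$pid" || return 1       # process must still exist
        if [ "$(uname)" = Linux ]; then
            local process_name
            process_name=$(ps --no-headers -o comm= "$pid")
            # Assumed guard: do not signal a sudo wrapper directly
            [ "$process_name" = sudo ] && return 1
        fi
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"                      # reap so the test run exits cleanly
    }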
00:07:46.613 06:41:39 blockdev_nvme_gpt -- bdev/blockdev.sh@774 -- # trap cleanup SIGINT SIGTERM EXIT 00:07:46.613 06:41:39 blockdev_nvme_gpt -- bdev/blockdev.sh@776 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:07:46.613 06:41:39 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:07:46.613 06:41:39 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:46.613 06:41:39 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:46.613 ************************************ 00:07:46.613 START TEST bdev_verify 00:07:46.613 ************************************ 00:07:46.613 06:41:39 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:07:46.613 [2024-11-18 06:41:39.613882] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 00:07:46.613 [2024-11-18 06:41:39.613987] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73733 ] 00:07:46.875 [2024-11-18 06:41:39.761540] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:46.875 [2024-11-18 06:41:39.779584] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:46.875 [2024-11-18 06:41:39.779626] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:47.137 Running I/O for 5 seconds... 
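[Annotation] The run_test line above launches the bdevperf example app against the same bdev.json configuration the nbd test used. Restating the invocation from the log with its knobs annotated (flag meanings per the bdevperf usage text; -C is kept verbatim as the harness passes it):

    # queue depth 128, 4 KiB I/Os, verify workload (write a pattern, read it
    # back, compare), 5 second run, core mask 0x3 = reactors on cores 0 and 1;
    # -C is reproduced from the log as-is (see bdevperf usage for its semantics)
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
        -q 128 -o 4096 -w verify -t 5 -C -m 0x3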
00:07:49.468 19904.00 IOPS, 77.75 MiB/s [2024-11-18T06:41:43.501Z] 20128.00 IOPS, 78.62 MiB/s [2024-11-18T06:41:44.437Z] 20224.00 IOPS, 79.00 MiB/s [2024-11-18T06:41:45.370Z] 20288.00 IOPS, 79.25 MiB/s [2024-11-18T06:41:45.370Z] 20249.60 IOPS, 79.10 MiB/s 00:07:52.283 Latency(us) 00:07:52.283 [2024-11-18T06:41:45.370Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:52.283 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:52.283 Verification LBA range: start 0x0 length 0xbd0bd 00:07:52.283 Nvme0n1 : 5.07 1400.94 5.47 0.00 0.00 90858.68 8519.68 75416.81 00:07:52.283 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:52.283 Verification LBA range: start 0xbd0bd length 0xbd0bd 00:07:52.283 Nvme0n1 : 5.04 1447.58 5.65 0.00 0.00 88043.84 17140.18 83886.08 00:07:52.283 Job: Nvme1n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:52.283 Verification LBA range: start 0x0 length 0x4ff80 00:07:52.283 Nvme1n1p1 : 5.09 1407.48 5.50 0.00 0.00 90609.05 16736.89 72593.72 00:07:52.283 Job: Nvme1n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:52.283 Verification LBA range: start 0x4ff80 length 0x4ff80 00:07:52.283 Nvme1n1p1 : 5.10 1456.73 5.69 0.00 0.00 87376.63 13712.15 73803.62 00:07:52.283 Job: Nvme1n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:52.283 Verification LBA range: start 0x0 length 0x4ff7f 00:07:52.283 Nvme1n1p2 : 5.09 1407.04 5.50 0.00 0.00 90567.66 16031.11 71787.13 00:07:52.283 Job: Nvme1n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:52.283 Verification LBA range: start 0x4ff7f length 0x4ff7f 00:07:52.283 Nvme1n1p2 : 5.10 1456.30 5.69 0.00 0.00 87183.43 13107.20 72190.42 00:07:52.283 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:52.283 Verification LBA range: start 0x0 length 0x80000 00:07:52.283 Nvme2n1 : 5.10 1406.21 5.49 0.00 0.00 90435.88 17543.48 66140.95 00:07:52.283 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:52.283 Verification LBA range: start 0x80000 length 0x80000 00:07:52.283 Nvme2n1 : 5.10 1455.10 5.68 0.00 0.00 87058.87 15426.17 70577.23 00:07:52.283 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:52.283 Verification LBA range: start 0x0 length 0x80000 00:07:52.283 Nvme2n2 : 5.10 1405.51 5.49 0.00 0.00 90269.58 17946.78 67350.84 00:07:52.283 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:52.283 Verification LBA range: start 0x80000 length 0x80000 00:07:52.283 Nvme2n2 : 5.10 1454.37 5.68 0.00 0.00 86923.96 17039.36 70173.93 00:07:52.283 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:52.283 Verification LBA range: start 0x0 length 0x80000 00:07:52.283 Nvme2n3 : 5.10 1405.15 5.49 0.00 0.00 90082.10 16535.24 70980.53 00:07:52.283 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:52.283 Verification LBA range: start 0x80000 length 0x80000 00:07:52.283 Nvme2n3 : 5.11 1453.98 5.68 0.00 0.00 86792.27 12905.55 72997.02 00:07:52.283 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:52.283 Verification LBA range: start 0x0 length 0x20000 00:07:52.283 Nvme3n1 : 5.10 1404.75 5.49 0.00 0.00 89924.42 8469.27 74610.22 00:07:52.283 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:52.283 Verification LBA range: start 0x20000 length 0x20000 00:07:52.283 Nvme3n1 
: 5.11 1453.59 5.68 0.00 0.00 86719.00 9931.22 75416.81 00:07:52.283 [2024-11-18T06:41:45.370Z] =================================================================================================================== 00:07:52.283 [2024-11-18T06:41:45.370Z] Total : 20014.72 78.18 0.00 0.00 88745.84 8469.27 83886.08 00:07:53.223 00:07:53.223 real 0m6.373s 00:07:53.223 user 0m12.032s 00:07:53.223 sys 0m0.200s 00:07:53.223 06:41:45 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:53.223 ************************************ 00:07:53.223 END TEST bdev_verify 00:07:53.223 ************************************ 00:07:53.223 06:41:45 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:07:53.223 06:41:45 blockdev_nvme_gpt -- bdev/blockdev.sh@777 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:07:53.223 06:41:45 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:07:53.223 06:41:45 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:53.223 06:41:45 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:53.223 ************************************ 00:07:53.223 START TEST bdev_verify_big_io 00:07:53.223 ************************************ 00:07:53.223 06:41:46 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:07:53.223 [2024-11-18 06:41:46.066370] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 00:07:53.223 [2024-11-18 06:41:46.066494] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73820 ] 00:07:53.223 [2024-11-18 06:41:46.227085] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:53.223 [2024-11-18 06:41:46.257392] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:53.223 [2024-11-18 06:41:46.257486] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:53.796 Running I/O for 5 seconds... 
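[Annotation] The big-I/O pass spinning up here reuses the exact same harness; only the I/O size changes, which is why it reports far fewer but much larger requests. The two invocations from the run_test lines, side by side (the /home/vagrant/spdk_repo prefixes are trimmed here only for the comparison):

    bdevperf --json test/bdev/bdev.json -q 128 -o 4096  -w verify -t 5 -C -m 0x3   # bdev_verify (4 KiB I/Os)
    bdevperf --json test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3   # bdev_verify_big_io (64 KiB I/Os)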
00:07:58.265 257.00 IOPS, 16.06 MiB/s [2024-11-18T06:41:52.296Z] 1602.00 IOPS, 100.12 MiB/s [2024-11-18T06:41:52.867Z] 2242.67 IOPS, 140.17 MiB/s [2024-11-18T06:41:53.129Z] 2556.75 IOPS, 159.80 MiB/s 00:08:00.042 Latency(us) 00:08:00.042 [2024-11-18T06:41:53.129Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:00.042 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:08:00.042 Verification LBA range: start 0x0 length 0xbd0b 00:08:00.042 Nvme0n1 : 5.70 110.15 6.88 0.00 0.00 1094072.87 25609.45 1264743.98 00:08:00.042 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:08:00.042 Verification LBA range: start 0xbd0b length 0xbd0b 00:08:00.042 Nvme0n1 : 5.72 103.56 6.47 0.00 0.00 1162846.05 36095.21 1780966.01 00:08:00.042 Job: Nvme1n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:08:00.042 Verification LBA range: start 0x0 length 0x4ff8 00:08:00.042 Nvme1n1p1 : 5.70 117.30 7.33 0.00 0.00 1014832.76 101631.21 1071160.71 00:08:00.042 Job: Nvme1n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:08:00.042 Verification LBA range: start 0x4ff8 length 0x4ff8 00:08:00.042 Nvme1n1p1 : 5.81 108.67 6.79 0.00 0.00 1088281.72 97598.23 1568024.42 00:08:00.042 Job: Nvme1n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:08:00.042 Verification LBA range: start 0x0 length 0x4ff7 00:08:00.042 Nvme1n1p2 : 5.78 118.98 7.44 0.00 0.00 976571.10 77030.01 1238932.87 00:08:00.042 Job: Nvme1n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:08:00.042 Verification LBA range: start 0x4ff7 length 0x4ff7 00:08:00.042 Nvme1n1p2 : 5.86 120.66 7.54 0.00 0.00 958567.71 87919.06 1058255.16 00:08:00.042 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:08:00.042 Verification LBA range: start 0x0 length 0x8000 00:08:00.042 Nvme2n1 : 5.94 116.40 7.27 0.00 0.00 958337.53 79046.50 1884210.41 00:08:00.042 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:08:00.042 Verification LBA range: start 0x8000 length 0x8000 00:08:00.042 Nvme2n1 : 5.93 125.78 7.86 0.00 0.00 895774.00 51622.20 1038896.84 00:08:00.042 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:08:00.042 Verification LBA range: start 0x0 length 0x8000 00:08:00.043 Nvme2n2 : 6.03 123.83 7.74 0.00 0.00 877982.37 92758.65 1910021.51 00:08:00.043 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:08:00.043 Verification LBA range: start 0x8000 length 0x8000 00:08:00.043 Nvme2n2 : 5.93 129.45 8.09 0.00 0.00 848535.76 69367.34 1051802.39 00:08:00.043 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:08:00.043 Verification LBA range: start 0x0 length 0x8000 00:08:00.043 Nvme2n3 : 6.09 134.04 8.38 0.00 0.00 791230.61 19660.80 1910021.51 00:08:00.043 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:08:00.043 Verification LBA range: start 0x8000 length 0x8000 00:08:00.043 Nvme2n3 : 6.07 142.97 8.94 0.00 0.00 747244.16 18350.08 1071160.71 00:08:00.043 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:08:00.043 Verification LBA range: start 0x0 length 0x2000 00:08:00.043 Nvme3n1 : 6.12 159.12 9.94 0.00 0.00 647569.44 718.38 1961643.72 00:08:00.043 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:08:00.043 Verification LBA range: start 0x2000 length 0x2000 00:08:00.043 Nvme3n1 : 6.08 157.20 9.82 0.00 0.00 
660558.58 812.90 1090519.04 00:08:00.043 [2024-11-18T06:41:53.130Z] =================================================================================================================== 00:08:00.043 [2024-11-18T06:41:53.130Z] Total : 1768.12 110.51 0.00 0.00 885917.01 718.38 1961643.72 00:08:00.988 00:08:00.988 real 0m7.754s 00:08:00.988 user 0m14.700s 00:08:00.988 sys 0m0.281s 00:08:00.988 06:41:53 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:00.988 ************************************ 00:08:00.988 06:41:53 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:08:00.988 END TEST bdev_verify_big_io 00:08:00.988 ************************************ 00:08:00.988 06:41:53 blockdev_nvme_gpt -- bdev/blockdev.sh@778 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:08:00.988 06:41:53 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:08:00.988 06:41:53 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:00.988 06:41:53 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:08:00.988 ************************************ 00:08:00.988 START TEST bdev_write_zeroes 00:08:00.988 ************************************ 00:08:00.988 06:41:53 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:08:00.988 [2024-11-18 06:41:53.879090] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 00:08:00.988 [2024-11-18 06:41:53.879209] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73923 ] 00:08:00.988 [2024-11-18 06:41:54.038314] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:00.988 [2024-11-18 06:41:54.066804] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:01.561 Running I/O for 1 seconds... 
00:08:02.504 58624.00 IOPS, 229.00 MiB/s 00:08:02.504 Latency(us) 00:08:02.504 [2024-11-18T06:41:55.591Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:02.504 Job: Nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:08:02.504 Nvme0n1 : 1.02 8395.97 32.80 0.00 0.00 15206.69 7259.37 30449.03 00:08:02.504 Job: Nvme1n1p1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:08:02.504 Nvme1n1p1 : 1.02 8385.53 32.76 0.00 0.00 15201.03 11494.01 25105.33 00:08:02.504 Job: Nvme1n1p2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:08:02.504 Nvme1n1p2 : 1.02 8375.21 32.72 0.00 0.00 15173.01 11695.66 25306.98 00:08:02.504 Job: Nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:08:02.504 Nvme2n1 : 1.03 8365.76 32.68 0.00 0.00 15153.92 11746.07 23794.61 00:08:02.504 Job: Nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:08:02.504 Nvme2n2 : 1.03 8356.20 32.64 0.00 0.00 15131.94 11645.24 23492.14 00:08:02.504 Job: Nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:08:02.504 Nvme2n3 : 1.03 8396.23 32.80 0.00 0.00 15022.85 7965.14 23592.96 00:08:02.504 Job: Nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:08:02.504 Nvme3n1 : 1.03 8281.88 32.35 0.00 0.00 15178.03 9981.64 23996.26 00:08:02.504 [2024-11-18T06:41:55.591Z] =================================================================================================================== 00:08:02.504 [2024-11-18T06:41:55.591Z] Total : 58556.78 228.74 0.00 0.00 15152.33 7259.37 30449.03 00:08:02.766 00:08:02.766 real 0m1.883s 00:08:02.766 user 0m1.546s 00:08:02.766 sys 0m0.217s 00:08:02.766 ************************************ 00:08:02.766 END TEST bdev_write_zeroes 00:08:02.766 06:41:55 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:02.766 06:41:55 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:08:02.766 ************************************ 00:08:02.766 06:41:55 blockdev_nvme_gpt -- bdev/blockdev.sh@781 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:08:02.766 06:41:55 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:08:02.766 06:41:55 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:02.766 06:41:55 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:08:02.766 ************************************ 00:08:02.766 START TEST bdev_json_nonenclosed 00:08:02.766 ************************************ 00:08:02.766 06:41:55 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:08:02.766 [2024-11-18 06:41:55.819616] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 
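[Annotation] bdev_json_nonenclosed is a negative test: bdevperf is pointed at test/bdev/nonenclosed.json and must fail config parsing with the "not enclosed in {}" error that appears below, exiting non-zero through spdk_app_stop. A hypothetical shape for such a fixture, inferred only from the error message (the repository's actual file may differ): valid JSON members with no enclosing object, e.g.

    "subsystems": []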
00:08:02.766 [2024-11-18 06:41:55.819768] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73965 ] 00:08:03.027 [2024-11-18 06:41:55.981925] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:03.027 [2024-11-18 06:41:56.010657] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:03.028 [2024-11-18 06:41:56.010763] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:08:03.028 [2024-11-18 06:41:56.010780] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:08:03.028 [2024-11-18 06:41:56.010792] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:03.028 00:08:03.028 real 0m0.336s 00:08:03.028 user 0m0.133s 00:08:03.028 sys 0m0.099s 00:08:03.028 06:41:56 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:03.028 ************************************ 00:08:03.028 END TEST bdev_json_nonenclosed 00:08:03.028 ************************************ 00:08:03.028 06:41:56 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:08:03.290 06:41:56 blockdev_nvme_gpt -- bdev/blockdev.sh@784 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:08:03.290 06:41:56 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:08:03.290 06:41:56 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:03.290 06:41:56 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:08:03.290 ************************************ 00:08:03.290 START TEST bdev_json_nonarray 00:08:03.290 ************************************ 00:08:03.290 06:41:56 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:08:03.290 [2024-11-18 06:41:56.221060] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 00:08:03.290 [2024-11-18 06:41:56.221209] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73985 ] 00:08:03.551 [2024-11-18 06:41:56.382542] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:03.551 [2024-11-18 06:41:56.402649] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:03.551 [2024-11-18 06:41:56.402741] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
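[Annotation] The companion bdev_json_nonarray test above ends the same way: the parser rejects nonarray.json because "subsystems" is present but is not an array. A hypothetical minimal config with that defect, again inferred from the error message rather than copied from the fixture:

    {
      "subsystems": {}
    }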
00:08:03.551 [2024-11-18 06:41:56.402757] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:08:03.551 [2024-11-18 06:41:56.402771] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:03.551 00:08:03.551 real 0m0.313s 00:08:03.551 user 0m0.123s 00:08:03.551 sys 0m0.085s 00:08:03.551 06:41:56 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:03.551 ************************************ 00:08:03.551 END TEST bdev_json_nonarray 00:08:03.551 06:41:56 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:08:03.551 ************************************ 00:08:03.551 06:41:56 blockdev_nvme_gpt -- bdev/blockdev.sh@786 -- # [[ gpt == bdev ]] 00:08:03.551 06:41:56 blockdev_nvme_gpt -- bdev/blockdev.sh@793 -- # [[ gpt == gpt ]] 00:08:03.551 06:41:56 blockdev_nvme_gpt -- bdev/blockdev.sh@794 -- # run_test bdev_gpt_uuid bdev_gpt_uuid 00:08:03.551 06:41:56 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:03.551 06:41:56 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:03.552 06:41:56 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:08:03.552 ************************************ 00:08:03.552 START TEST bdev_gpt_uuid 00:08:03.552 ************************************ 00:08:03.552 06:41:56 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@1129 -- # bdev_gpt_uuid 00:08:03.552 06:41:56 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@613 -- # local bdev 00:08:03.552 06:41:56 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@615 -- # start_spdk_tgt 00:08:03.552 06:41:56 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=74005 00:08:03.552 06:41:56 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:08:03.552 06:41:56 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@49 -- # waitforlisten 74005 00:08:03.552 06:41:56 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@835 -- # '[' -z 74005 ']' 00:08:03.552 06:41:56 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:03.552 06:41:56 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:08:03.552 06:41:56 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:03.552 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:03.552 06:41:56 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:03.552 06:41:56 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:03.552 06:41:56 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:08:03.552 [2024-11-18 06:41:56.607539] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 
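[Annotation] bdev_gpt_uuid checks that GPT partitions surface their GUIDs through the RPC layer. A condensed sketch of the flow exercised by the blockdev.sh trace that follows (rpc_cmd is the suite's wrapper around scripts/rpc.py; the GUID is the SPDK_TEST_first partition created earlier in the run, and the jq plumbing below is an illustrative simplification of the traced commands):

    # Load the shared bdev config into a fresh spdk_tgt and let examine finish
    rpc_cmd load_config -j /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json
    rpc_cmd bdev_wait_for_examine

    # Look the partition up by its unique GPT GUID, then check the JSON
    bdev=$(rpc_cmd bdev_get_bdevs -b 6f89f330-603b-4116-ac73-2ca8eae53030)
    [[ $(jq -r length <<< "$bdev") == 1 ]]   # exactly one matching bdev
    [[ $(jq -r '.[0].aliases[0]' <<< "$bdev") == 6f89f330-603b-4116-ac73-2ca8eae53030 ]]
    [[ $(jq -r '.[0].driver_specific.gpt.unique_partition_guid' <<< "$bdev") == 6f89f330-603b-4116-ac73-2ca8eae53030 ]]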
00:08:03.552 [2024-11-18 06:41:56.607698] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74005 ] 00:08:03.813 [2024-11-18 06:41:56.763149] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:03.813 [2024-11-18 06:41:56.792846] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:04.384 06:41:57 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:04.384 06:41:57 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@868 -- # return 0 00:08:04.384 06:41:57 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@617 -- # rpc_cmd load_config -j /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:08:04.384 06:41:57 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:04.384 06:41:57 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:08:04.956 Some configs were skipped because the RPC state that can call them passed over. 00:08:04.956 06:41:57 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:04.956 06:41:57 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@618 -- # rpc_cmd bdev_wait_for_examine 00:08:04.956 06:41:57 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:04.956 06:41:57 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:08:04.956 06:41:57 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:04.956 06:41:57 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@620 -- # rpc_cmd bdev_get_bdevs -b 6f89f330-603b-4116-ac73-2ca8eae53030 00:08:04.956 06:41:57 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:04.956 06:41:57 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:08:04.956 06:41:57 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:04.956 06:41:57 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@620 -- # bdev='[ 00:08:04.956 { 00:08:04.956 "name": "Nvme1n1p1", 00:08:04.956 "aliases": [ 00:08:04.956 "6f89f330-603b-4116-ac73-2ca8eae53030" 00:08:04.956 ], 00:08:04.956 "product_name": "GPT Disk", 00:08:04.956 "block_size": 4096, 00:08:04.956 "num_blocks": 655104, 00:08:04.956 "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:08:04.956 "assigned_rate_limits": { 00:08:04.956 "rw_ios_per_sec": 0, 00:08:04.956 "rw_mbytes_per_sec": 0, 00:08:04.956 "r_mbytes_per_sec": 0, 00:08:04.957 "w_mbytes_per_sec": 0 00:08:04.957 }, 00:08:04.957 "claimed": false, 00:08:04.957 "zoned": false, 00:08:04.957 "supported_io_types": { 00:08:04.957 "read": true, 00:08:04.957 "write": true, 00:08:04.957 "unmap": true, 00:08:04.957 "flush": true, 00:08:04.957 "reset": true, 00:08:04.957 "nvme_admin": false, 00:08:04.957 "nvme_io": false, 00:08:04.957 "nvme_io_md": false, 00:08:04.957 "write_zeroes": true, 00:08:04.957 "zcopy": false, 00:08:04.957 "get_zone_info": false, 00:08:04.957 "zone_management": false, 00:08:04.957 "zone_append": false, 00:08:04.957 "compare": true, 00:08:04.957 "compare_and_write": false, 00:08:04.957 "abort": true, 00:08:04.957 "seek_hole": false, 00:08:04.957 "seek_data": false, 00:08:04.957 "copy": true, 00:08:04.957 "nvme_iov_md": false 00:08:04.957 }, 00:08:04.957 "driver_specific": { 
00:08:04.957 "gpt": { 00:08:04.957 "base_bdev": "Nvme1n1", 00:08:04.957 "offset_blocks": 256, 00:08:04.957 "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b", 00:08:04.957 "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:08:04.957 "partition_name": "SPDK_TEST_first" 00:08:04.957 } 00:08:04.957 } 00:08:04.957 } 00:08:04.957 ]' 00:08:04.957 06:41:57 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@621 -- # jq -r length 00:08:04.957 06:41:57 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@621 -- # [[ 1 == \1 ]] 00:08:04.957 06:41:57 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@622 -- # jq -r '.[0].aliases[0]' 00:08:04.957 06:41:57 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@622 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:08:04.957 06:41:57 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@623 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:08:04.957 06:41:57 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@623 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:08:04.957 06:41:57 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@625 -- # rpc_cmd bdev_get_bdevs -b abf1734f-66e5-4c0f-aa29-4021d4d307df 00:08:04.957 06:41:57 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:04.957 06:41:57 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:08:04.957 06:41:57 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:04.957 06:41:57 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@625 -- # bdev='[ 00:08:04.957 { 00:08:04.957 "name": "Nvme1n1p2", 00:08:04.957 "aliases": [ 00:08:04.957 "abf1734f-66e5-4c0f-aa29-4021d4d307df" 00:08:04.957 ], 00:08:04.957 "product_name": "GPT Disk", 00:08:04.957 "block_size": 4096, 00:08:04.957 "num_blocks": 655103, 00:08:04.957 "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 00:08:04.957 "assigned_rate_limits": { 00:08:04.957 "rw_ios_per_sec": 0, 00:08:04.957 "rw_mbytes_per_sec": 0, 00:08:04.957 "r_mbytes_per_sec": 0, 00:08:04.957 "w_mbytes_per_sec": 0 00:08:04.957 }, 00:08:04.957 "claimed": false, 00:08:04.957 "zoned": false, 00:08:04.957 "supported_io_types": { 00:08:04.957 "read": true, 00:08:04.957 "write": true, 00:08:04.957 "unmap": true, 00:08:04.957 "flush": true, 00:08:04.957 "reset": true, 00:08:04.957 "nvme_admin": false, 00:08:04.957 "nvme_io": false, 00:08:04.957 "nvme_io_md": false, 00:08:04.957 "write_zeroes": true, 00:08:04.957 "zcopy": false, 00:08:04.957 "get_zone_info": false, 00:08:04.957 "zone_management": false, 00:08:04.957 "zone_append": false, 00:08:04.957 "compare": true, 00:08:04.957 "compare_and_write": false, 00:08:04.957 "abort": true, 00:08:04.957 "seek_hole": false, 00:08:04.957 "seek_data": false, 00:08:04.957 "copy": true, 00:08:04.957 "nvme_iov_md": false 00:08:04.957 }, 00:08:04.957 "driver_specific": { 00:08:04.957 "gpt": { 00:08:04.957 "base_bdev": "Nvme1n1", 00:08:04.957 "offset_blocks": 655360, 00:08:04.957 "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c", 00:08:04.957 "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 00:08:04.957 "partition_name": "SPDK_TEST_second" 00:08:04.957 } 00:08:04.957 } 00:08:04.957 } 00:08:04.957 ]' 00:08:04.957 06:41:57 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@626 -- # jq -r length 00:08:04.957 06:41:57 blockdev_nvme_gpt.bdev_gpt_uuid 
-- bdev/blockdev.sh@626 -- # [[ 1 == \1 ]] 00:08:04.957 06:41:57 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@627 -- # jq -r '.[0].aliases[0]' 00:08:04.957 06:41:57 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@627 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:08:04.957 06:41:57 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@628 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:08:04.957 06:41:58 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@628 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:08:04.957 06:41:58 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@630 -- # killprocess 74005 00:08:04.957 06:41:58 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@954 -- # '[' -z 74005 ']' 00:08:04.957 06:41:58 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@958 -- # kill -0 74005 00:08:04.957 06:41:58 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@959 -- # uname 00:08:04.957 06:41:58 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:04.957 06:41:58 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74005 00:08:05.218 06:41:58 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:05.218 06:41:58 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:05.218 killing process with pid 74005 00:08:05.218 06:41:58 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74005' 00:08:05.218 06:41:58 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@973 -- # kill 74005 00:08:05.218 06:41:58 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@978 -- # wait 74005 00:08:05.480 00:08:05.480 real 0m1.827s 00:08:05.480 user 0m1.933s 00:08:05.480 sys 0m0.414s 00:08:05.480 06:41:58 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:05.480 ************************************ 00:08:05.480 END TEST bdev_gpt_uuid 00:08:05.480 ************************************ 00:08:05.480 06:41:58 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:08:05.480 06:41:58 blockdev_nvme_gpt -- bdev/blockdev.sh@797 -- # [[ gpt == crypto_sw ]] 00:08:05.480 06:41:58 blockdev_nvme_gpt -- bdev/blockdev.sh@809 -- # trap - SIGINT SIGTERM EXIT 00:08:05.480 06:41:58 blockdev_nvme_gpt -- bdev/blockdev.sh@810 -- # cleanup 00:08:05.480 06:41:58 blockdev_nvme_gpt -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:08:05.480 06:41:58 blockdev_nvme_gpt -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:08:05.480 06:41:58 blockdev_nvme_gpt -- bdev/blockdev.sh@26 -- # [[ gpt == rbd ]] 00:08:05.480 06:41:58 blockdev_nvme_gpt -- bdev/blockdev.sh@30 -- # [[ gpt == daos ]] 00:08:05.480 06:41:58 blockdev_nvme_gpt -- bdev/blockdev.sh@34 -- # [[ gpt = \g\p\t ]] 00:08:05.480 06:41:58 blockdev_nvme_gpt -- bdev/blockdev.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:08:05.741 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:08:06.002 Waiting for block devices as requested 00:08:06.002 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:08:06.002 0000:00:10.0 (1b36 0010): 
uio_pci_generic -> nvme 00:08:06.263 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:08:06.263 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:08:11.571 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:08:11.571 06:42:04 blockdev_nvme_gpt -- bdev/blockdev.sh@36 -- # [[ -b /dev/nvme0n1 ]] 00:08:11.571 06:42:04 blockdev_nvme_gpt -- bdev/blockdev.sh@37 -- # wipefs --all /dev/nvme0n1 00:08:11.571 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:08:11.571 /dev/nvme0n1: 8 bytes were erased at offset 0x13ffff000 (gpt): 45 46 49 20 50 41 52 54 00:08:11.571 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:08:11.571 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:08:11.571 06:42:04 blockdev_nvme_gpt -- bdev/blockdev.sh@40 -- # [[ gpt == xnvme ]] 00:08:11.571 00:08:11.571 real 0m50.001s 00:08:11.571 user 1m2.733s 00:08:11.571 sys 0m8.424s 00:08:11.571 ************************************ 00:08:11.571 END TEST blockdev_nvme_gpt 00:08:11.571 06:42:04 blockdev_nvme_gpt -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:11.571 06:42:04 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:08:11.571 ************************************ 00:08:11.571 06:42:04 -- spdk/autotest.sh@212 -- # run_test nvme /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:08:11.571 06:42:04 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:11.571 06:42:04 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:11.571 06:42:04 -- common/autotest_common.sh@10 -- # set +x 00:08:11.571 ************************************ 00:08:11.571 START TEST nvme 00:08:11.571 ************************************ 00:08:11.571 06:42:04 nvme -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:08:11.832 * Looking for test storage... 00:08:11.832 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:08:11.832 06:42:04 nvme -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:11.832 06:42:04 nvme -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:11.832 06:42:04 nvme -- common/autotest_common.sh@1693 -- # lcov --version 00:08:11.832 06:42:04 nvme -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:11.832 06:42:04 nvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:11.832 06:42:04 nvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:11.832 06:42:04 nvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:11.832 06:42:04 nvme -- scripts/common.sh@336 -- # IFS=.-: 00:08:11.832 06:42:04 nvme -- scripts/common.sh@336 -- # read -ra ver1 00:08:11.832 06:42:04 nvme -- scripts/common.sh@337 -- # IFS=.-: 00:08:11.832 06:42:04 nvme -- scripts/common.sh@337 -- # read -ra ver2 00:08:11.832 06:42:04 nvme -- scripts/common.sh@338 -- # local 'op=<' 00:08:11.832 06:42:04 nvme -- scripts/common.sh@340 -- # ver1_l=2 00:08:11.832 06:42:04 nvme -- scripts/common.sh@341 -- # ver2_l=1 00:08:11.832 06:42:04 nvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:11.832 06:42:04 nvme -- scripts/common.sh@344 -- # case "$op" in 00:08:11.832 06:42:04 nvme -- scripts/common.sh@345 -- # : 1 00:08:11.832 06:42:04 nvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:11.832 06:42:04 nvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:11.832 06:42:04 nvme -- scripts/common.sh@365 -- # decimal 1 00:08:11.832 06:42:04 nvme -- scripts/common.sh@353 -- # local d=1 00:08:11.832 06:42:04 nvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:11.832 06:42:04 nvme -- scripts/common.sh@355 -- # echo 1 00:08:11.832 06:42:04 nvme -- scripts/common.sh@365 -- # ver1[v]=1 00:08:11.832 06:42:04 nvme -- scripts/common.sh@366 -- # decimal 2 00:08:11.832 06:42:04 nvme -- scripts/common.sh@353 -- # local d=2 00:08:11.832 06:42:04 nvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:11.832 06:42:04 nvme -- scripts/common.sh@355 -- # echo 2 00:08:11.832 06:42:04 nvme -- scripts/common.sh@366 -- # ver2[v]=2 00:08:11.832 06:42:04 nvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:11.832 06:42:04 nvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:11.832 06:42:04 nvme -- scripts/common.sh@368 -- # return 0 00:08:11.832 06:42:04 nvme -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:11.832 06:42:04 nvme -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:11.832 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:11.832 --rc genhtml_branch_coverage=1 00:08:11.832 --rc genhtml_function_coverage=1 00:08:11.832 --rc genhtml_legend=1 00:08:11.832 --rc geninfo_all_blocks=1 00:08:11.832 --rc geninfo_unexecuted_blocks=1 00:08:11.832 00:08:11.832 ' 00:08:11.832 06:42:04 nvme -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:11.832 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:11.832 --rc genhtml_branch_coverage=1 00:08:11.832 --rc genhtml_function_coverage=1 00:08:11.832 --rc genhtml_legend=1 00:08:11.832 --rc geninfo_all_blocks=1 00:08:11.832 --rc geninfo_unexecuted_blocks=1 00:08:11.832 00:08:11.832 ' 00:08:11.832 06:42:04 nvme -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:11.832 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:11.832 --rc genhtml_branch_coverage=1 00:08:11.832 --rc genhtml_function_coverage=1 00:08:11.832 --rc genhtml_legend=1 00:08:11.832 --rc geninfo_all_blocks=1 00:08:11.832 --rc geninfo_unexecuted_blocks=1 00:08:11.832 00:08:11.832 ' 00:08:11.832 06:42:04 nvme -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:11.832 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:11.832 --rc genhtml_branch_coverage=1 00:08:11.832 --rc genhtml_function_coverage=1 00:08:11.832 --rc genhtml_legend=1 00:08:11.832 --rc geninfo_all_blocks=1 00:08:11.832 --rc geninfo_unexecuted_blocks=1 00:08:11.832 00:08:11.832 ' 00:08:11.832 06:42:04 nvme -- nvme/nvme.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:08:12.405 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:08:12.977 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:08:12.977 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:08:12.977 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:08:12.977 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:08:12.977 06:42:05 nvme -- nvme/nvme.sh@79 -- # uname 00:08:12.977 06:42:05 nvme -- nvme/nvme.sh@79 -- # '[' Linux = Linux ']' 00:08:12.977 06:42:05 nvme -- nvme/nvme.sh@80 -- # trap 'kill_stub -9; exit 1' SIGINT SIGTERM EXIT 00:08:12.977 06:42:05 nvme -- nvme/nvme.sh@81 -- # start_stub '-s 4096 -i 0 -m 0xE' 00:08:12.977 06:42:05 nvme -- common/autotest_common.sh@1086 -- # _start_stub '-s 4096 -i 0 -m 0xE' 00:08:12.977 06:42:05 nvme -- 
common/autotest_common.sh@1072 -- # _randomize_va_space=2 00:08:12.977 06:42:05 nvme -- common/autotest_common.sh@1073 -- # echo 0 00:08:12.977 Waiting for stub to ready for secondary processes... 00:08:12.977 06:42:05 nvme -- common/autotest_common.sh@1075 -- # stubpid=74634 00:08:12.977 06:42:05 nvme -- common/autotest_common.sh@1076 -- # echo Waiting for stub to ready for secondary processes... 00:08:12.977 06:42:05 nvme -- common/autotest_common.sh@1077 -- # '[' -e /var/run/spdk_stub0 ']' 00:08:12.977 06:42:05 nvme -- common/autotest_common.sh@1074 -- # /home/vagrant/spdk_repo/spdk/test/app/stub/stub -s 4096 -i 0 -m 0xE 00:08:12.977 06:42:05 nvme -- common/autotest_common.sh@1079 -- # [[ -e /proc/74634 ]] 00:08:12.977 06:42:05 nvme -- common/autotest_common.sh@1080 -- # sleep 1s 00:08:12.977 [2024-11-18 06:42:05.993101] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 00:08:12.977 [2024-11-18 06:42:05.993240] [ DPDK EAL parameters: stub -c 0xE -m 4096 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto --proc-type=primary ] 00:08:13.921 06:42:06 nvme -- common/autotest_common.sh@1077 -- # '[' -e /var/run/spdk_stub0 ']' 00:08:13.921 06:42:06 nvme -- common/autotest_common.sh@1079 -- # [[ -e /proc/74634 ]] 00:08:13.921 06:42:06 nvme -- common/autotest_common.sh@1080 -- # sleep 1s 00:08:14.182 [2024-11-18 06:42:07.096412] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:14.182 [2024-11-18 06:42:07.114259] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:14.182 [2024-11-18 06:42:07.114512] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:14.182 [2024-11-18 06:42:07.114565] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:08:14.182 [2024-11-18 06:42:07.127746] nvme_cuse.c:1408:start_cuse_thread: *NOTICE*: Successfully started cuse thread to poll for admin commands 00:08:14.182 [2024-11-18 06:42:07.127799] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:08:14.182 [2024-11-18 06:42:07.142517] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0 created 00:08:14.182 [2024-11-18 06:42:07.142694] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0n1 created 00:08:14.182 [2024-11-18 06:42:07.144914] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:08:14.182 [2024-11-18 06:42:07.145258] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme1 created 00:08:14.182 [2024-11-18 06:42:07.145402] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme1n1 created 00:08:14.182 [2024-11-18 06:42:07.146411] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:08:14.182 [2024-11-18 06:42:07.146751] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme2 created 00:08:14.182 [2024-11-18 06:42:07.146828] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme2n1 created 00:08:14.182 [2024-11-18 06:42:07.149141] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:08:14.182 [2024-11-18 06:42:07.149445] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3 created 00:08:14.182 [2024-11-18 06:42:07.149539] nvme_cuse.c: 
928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n1 created 00:08:14.182 [2024-11-18 06:42:07.149633] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n2 created 00:08:14.182 [2024-11-18 06:42:07.149713] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n3 created 00:08:15.122 done. 00:08:15.122 06:42:07 nvme -- common/autotest_common.sh@1077 -- # '[' -e /var/run/spdk_stub0 ']' 00:08:15.122 06:42:07 nvme -- common/autotest_common.sh@1082 -- # echo done. 00:08:15.122 06:42:07 nvme -- nvme/nvme.sh@84 -- # run_test nvme_reset /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:08:15.122 06:42:07 nvme -- common/autotest_common.sh@1105 -- # '[' 10 -le 1 ']' 00:08:15.122 06:42:07 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:15.122 06:42:07 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:15.122 ************************************ 00:08:15.122 START TEST nvme_reset 00:08:15.122 ************************************ 00:08:15.122 06:42:07 nvme.nvme_reset -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:08:15.122 Initializing NVMe Controllers 00:08:15.122 Skipping QEMU NVMe SSD at 0000:00:11.0 00:08:15.122 Skipping QEMU NVMe SSD at 0000:00:13.0 00:08:15.122 Skipping QEMU NVMe SSD at 0000:00:10.0 00:08:15.122 Skipping QEMU NVMe SSD at 0000:00:12.0 00:08:15.122 No NVMe controller found, /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset exiting 00:08:15.122 00:08:15.122 real 0m0.222s 00:08:15.122 user 0m0.071s 00:08:15.122 sys 0m0.102s 00:08:15.122 ************************************ 00:08:15.122 END TEST nvme_reset 00:08:15.122 ************************************ 00:08:15.122 06:42:08 nvme.nvme_reset -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:15.122 06:42:08 nvme.nvme_reset -- common/autotest_common.sh@10 -- # set +x 00:08:15.384 06:42:08 nvme -- nvme/nvme.sh@85 -- # run_test nvme_identify nvme_identify 00:08:15.384 06:42:08 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:15.384 06:42:08 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:15.384 06:42:08 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:15.384 ************************************ 00:08:15.384 START TEST nvme_identify 00:08:15.384 ************************************ 00:08:15.384 06:42:08 nvme.nvme_identify -- common/autotest_common.sh@1129 -- # nvme_identify 00:08:15.384 06:42:08 nvme.nvme_identify -- nvme/nvme.sh@12 -- # bdfs=() 00:08:15.384 06:42:08 nvme.nvme_identify -- nvme/nvme.sh@12 -- # local bdfs bdf 00:08:15.384 06:42:08 nvme.nvme_identify -- nvme/nvme.sh@13 -- # bdfs=($(get_nvme_bdfs)) 00:08:15.384 06:42:08 nvme.nvme_identify -- nvme/nvme.sh@13 -- # get_nvme_bdfs 00:08:15.384 06:42:08 nvme.nvme_identify -- common/autotest_common.sh@1498 -- # bdfs=() 00:08:15.384 06:42:08 nvme.nvme_identify -- common/autotest_common.sh@1498 -- # local bdfs 00:08:15.384 06:42:08 nvme.nvme_identify -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:08:15.384 06:42:08 nvme.nvme_identify -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:08:15.384 06:42:08 nvme.nvme_identify -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:08:15.384 06:42:08 nvme.nvme_identify -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:08:15.384 06:42:08 nvme.nvme_identify -- 
common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:08:15.384 06:42:08 nvme.nvme_identify -- nvme/nvme.sh@14 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -i 0 00:08:15.649 [2024-11-18 06:42:08.510537] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:11.0, 0] process 74668 terminated unexpected 00:08:15.649 ===================================================== 00:08:15.649 NVMe Controller at 0000:00:11.0 [1b36:0010] 00:08:15.649 ===================================================== 00:08:15.649 Controller Capabilities/Features 00:08:15.649 ================================ 00:08:15.649 Vendor ID: 1b36 00:08:15.649 Subsystem Vendor ID: 1af4 00:08:15.649 Serial Number: 12341 00:08:15.649 Model Number: QEMU NVMe Ctrl 00:08:15.649 Firmware Version: 8.0.0 00:08:15.649 Recommended Arb Burst: 6 00:08:15.649 IEEE OUI Identifier: 00 54 52 00:08:15.649 Multi-path I/O 00:08:15.649 May have multiple subsystem ports: No 00:08:15.649 May have multiple controllers: No 00:08:15.649 Associated with SR-IOV VF: No 00:08:15.649 Max Data Transfer Size: 524288 00:08:15.649 Max Number of Namespaces: 256 00:08:15.649 Max Number of I/O Queues: 64 00:08:15.649 NVMe Specification Version (VS): 1.4 00:08:15.649 NVMe Specification Version (Identify): 1.4 00:08:15.649 Maximum Queue Entries: 2048 00:08:15.649 Contiguous Queues Required: Yes 00:08:15.649 Arbitration Mechanisms Supported 00:08:15.649 Weighted Round Robin: Not Supported 00:08:15.649 Vendor Specific: Not Supported 00:08:15.649 Reset Timeout: 7500 ms 00:08:15.649 Doorbell Stride: 4 bytes 00:08:15.649 NVM Subsystem Reset: Not Supported 00:08:15.649 Command Sets Supported 00:08:15.649 NVM Command Set: Supported 00:08:15.649 Boot Partition: Not Supported 00:08:15.649 Memory Page Size Minimum: 4096 bytes 00:08:15.649 Memory Page Size Maximum: 65536 bytes 00:08:15.649 Persistent Memory Region: Not Supported 00:08:15.649 Optional Asynchronous Events Supported 00:08:15.649 Namespace Attribute Notices: Supported 00:08:15.649 Firmware Activation Notices: Not Supported 00:08:15.649 ANA Change Notices: Not Supported 00:08:15.649 PLE Aggregate Log Change Notices: Not Supported 00:08:15.649 LBA Status Info Alert Notices: Not Supported 00:08:15.649 EGE Aggregate Log Change Notices: Not Supported 00:08:15.649 Normal NVM Subsystem Shutdown event: Not Supported 00:08:15.649 Zone Descriptor Change Notices: Not Supported 00:08:15.649 Discovery Log Change Notices: Not Supported 00:08:15.649 Controller Attributes 00:08:15.649 128-bit Host Identifier: Not Supported 00:08:15.649 Non-Operational Permissive Mode: Not Supported 00:08:15.649 NVM Sets: Not Supported 00:08:15.649 Read Recovery Levels: Not Supported 00:08:15.649 Endurance Groups: Not Supported 00:08:15.649 Predictable Latency Mode: Not Supported 00:08:15.649 Traffic Based Keep ALive: Not Supported 00:08:15.649 Namespace Granularity: Not Supported 00:08:15.649 SQ Associations: Not Supported 00:08:15.649 UUID List: Not Supported 00:08:15.649 Multi-Domain Subsystem: Not Supported 00:08:15.649 Fixed Capacity Management: Not Supported 00:08:15.649 Variable Capacity Management: Not Supported 00:08:15.649 Delete Endurance Group: Not Supported 00:08:15.649 Delete NVM Set: Not Supported 00:08:15.649 Extended LBA Formats Supported: Supported 00:08:15.649 Flexible Data Placement Supported: Not Supported 00:08:15.649 00:08:15.649 Controller Memory Buffer Support 00:08:15.649 ================================ 00:08:15.649 Supported: No 
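(Aside: the get_nvme_bdfs helper traced just above boils down to a single pipeline — gen_nvme.sh emits a bdev-config JSON and jq pulls each controller's PCI address out of it. A minimal standalone sketch, assuming the spdk_repo layout used in this run; the JSON shape is inferred from the jq filter in the xtrace:

    rootdir=/home/vagrant/spdk_repo/spdk
    # gen_nvme.sh prints a bdev config JSON; each entry carries params.traddr
    bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
    (( ${#bdfs[@]} > 0 )) || { echo 'no NVMe controllers found' >&2; exit 1; }
    printf '%s\n' "${bdfs[@]}"   # here: 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0

The "(( 4 == 0 ))" guard in the xtrace is this same emptiness check with the array count already expanded.)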
00:08:15.649 00:08:15.649 Persistent Memory Region Support 00:08:15.649 ================================ 00:08:15.649 Supported: No 00:08:15.649 00:08:15.649 Admin Command Set Attributes 00:08:15.649 ============================ 00:08:15.649 Security Send/Receive: Not Supported 00:08:15.649 Format NVM: Supported 00:08:15.649 Firmware Activate/Download: Not Supported 00:08:15.649 Namespace Management: Supported 00:08:15.649 Device Self-Test: Not Supported 00:08:15.649 Directives: Supported 00:08:15.649 NVMe-MI: Not Supported 00:08:15.649 Virtualization Management: Not Supported 00:08:15.649 Doorbell Buffer Config: Supported 00:08:15.649 Get LBA Status Capability: Not Supported 00:08:15.649 Command & Feature Lockdown Capability: Not Supported 00:08:15.649 Abort Command Limit: 4 00:08:15.649 Async Event Request Limit: 4 00:08:15.649 Number of Firmware Slots: N/A 00:08:15.649 Firmware Slot 1 Read-Only: N/A 00:08:15.649 Firmware Activation Without Reset: N/A 00:08:15.649 Multiple Update Detection Support: N/A 00:08:15.649 Firmware Update Granularity: No Information Provided 00:08:15.649 Per-Namespace SMART Log: Yes 00:08:15.649 Asymmetric Namespace Access Log Page: Not Supported 00:08:15.649 Subsystem NQN: nqn.2019-08.org.qemu:12341 00:08:15.649 Command Effects Log Page: Supported 00:08:15.649 Get Log Page Extended Data: Supported 00:08:15.649 Telemetry Log Pages: Not Supported 00:08:15.649 Persistent Event Log Pages: Not Supported 00:08:15.649 Supported Log Pages Log Page: May Support 00:08:15.649 Commands Supported & Effects Log Page: Not Supported 00:08:15.649 Feature Identifiers & Effects Log Page:May Support 00:08:15.649 NVMe-MI Commands & Effects Log Page: May Support 00:08:15.649 Data Area 4 for Telemetry Log: Not Supported 00:08:15.649 Error Log Page Entries Supported: 1 00:08:15.649 Keep Alive: Not Supported 00:08:15.649 00:08:15.649 NVM Command Set Attributes 00:08:15.649 ========================== 00:08:15.649 Submission Queue Entry Size 00:08:15.649 Max: 64 00:08:15.649 Min: 64 00:08:15.649 Completion Queue Entry Size 00:08:15.649 Max: 16 00:08:15.649 Min: 16 00:08:15.649 Number of Namespaces: 256 00:08:15.649 Compare Command: Supported 00:08:15.649 Write Uncorrectable Command: Not Supported 00:08:15.649 Dataset Management Command: Supported 00:08:15.649 Write Zeroes Command: Supported 00:08:15.649 Set Features Save Field: Supported 00:08:15.649 Reservations: Not Supported 00:08:15.649 Timestamp: Supported 00:08:15.649 Copy: Supported 00:08:15.649 Volatile Write Cache: Present 00:08:15.649 Atomic Write Unit (Normal): 1 00:08:15.649 Atomic Write Unit (PFail): 1 00:08:15.649 Atomic Compare & Write Unit: 1 00:08:15.649 Fused Compare & Write: Not Supported 00:08:15.649 Scatter-Gather List 00:08:15.649 SGL Command Set: Supported 00:08:15.649 SGL Keyed: Not Supported 00:08:15.649 SGL Bit Bucket Descriptor: Not Supported 00:08:15.649 SGL Metadata Pointer: Not Supported 00:08:15.650 Oversized SGL: Not Supported 00:08:15.650 SGL Metadata Address: Not Supported 00:08:15.650 SGL Offset: Not Supported 00:08:15.650 Transport SGL Data Block: Not Supported 00:08:15.650 Replay Protected Memory Block: Not Supported 00:08:15.650 00:08:15.650 Firmware Slot Information 00:08:15.650 ========================= 00:08:15.650 Active slot: 1 00:08:15.650 Slot 1 Firmware Revision: 1.0 00:08:15.650 00:08:15.650 00:08:15.650 Commands Supported and Effects 00:08:15.650 ============================== 00:08:15.650 Admin Commands 00:08:15.650 -------------- 00:08:15.650 Delete I/O Submission Queue (00h): Supported 
00:08:15.650 Create I/O Submission Queue (01h): Supported 00:08:15.650 Get Log Page (02h): Supported 00:08:15.650 Delete I/O Completion Queue (04h): Supported 00:08:15.650 Create I/O Completion Queue (05h): Supported 00:08:15.650 Identify (06h): Supported 00:08:15.650 Abort (08h): Supported 00:08:15.650 Set Features (09h): Supported 00:08:15.650 Get Features (0Ah): Supported 00:08:15.650 Asynchronous Event Request (0Ch): Supported 00:08:15.650 Namespace Attachment (15h): Supported NS-Inventory-Change 00:08:15.650 Directive Send (19h): Supported 00:08:15.650 Directive Receive (1Ah): Supported 00:08:15.650 Virtualization Management (1Ch): Supported 00:08:15.650 Doorbell Buffer Config (7Ch): Supported 00:08:15.650 Format NVM (80h): Supported LBA-Change 00:08:15.650 I/O Commands 00:08:15.650 ------------ 00:08:15.650 Flush (00h): Supported LBA-Change 00:08:15.650 Write (01h): Supported LBA-Change 00:08:15.650 Read (02h): Supported 00:08:15.650 Compare (05h): Supported 00:08:15.650 Write Zeroes (08h): Supported LBA-Change 00:08:15.650 Dataset Management (09h): Supported LBA-Change 00:08:15.650 Unknown (0Ch): Supported 00:08:15.650 Unknown (12h): Supported 00:08:15.650 Copy (19h): Supported LBA-Change 00:08:15.650 Unknown (1Dh): Supported LBA-Change 00:08:15.650 00:08:15.650 Error Log 00:08:15.650 ========= 00:08:15.650 00:08:15.650 Arbitration 00:08:15.650 =========== 00:08:15.650 Arbitration Burst: no limit 00:08:15.650 00:08:15.650 Power Management 00:08:15.650 ================ 00:08:15.650 Number of Power States: 1 00:08:15.650 Current Power State: Power State #0 00:08:15.650 Power State #0: 00:08:15.650 Max Power: 25.00 W 00:08:15.650 Non-Operational State: Operational 00:08:15.650 Entry Latency: 16 microseconds 00:08:15.650 Exit Latency: 4 microseconds 00:08:15.650 Relative Read Throughput: 0 00:08:15.650 Relative Read Latency: 0 00:08:15.650 Relative Write Throughput: 0 00:08:15.650 Relative Write Latency: 0 00:08:15.650 Idle Power: Not Reported 00:08:15.650 Active Power: Not Reported 00:08:15.650 Non-Operational Permissive Mode: Not Supported 00:08:15.650 00:08:15.650 Health Information 00:08:15.650 ================== 00:08:15.650 Critical Warnings: 00:08:15.650 Available Spare Space: OK 00:08:15.650 Temperature: OK 00:08:15.650 Device Reliability: OK 00:08:15.650 Read Only: No 00:08:15.650 Volatile Memory Backup: OK 00:08:15.650 Current Temperature: 323 Kelvin (50 Celsius) 00:08:15.650 Temperature Threshold: 343 Kelvin (70 Celsius) 00:08:15.650 Available Spare: 0% 00:08:15.650 Available Spare Threshold: 0% 00:08:15.650 Life Percentage Used: 0% 00:08:15.650 Data Units Read: 1070 00:08:15.650 Data Units Written: 943 00:08:15.650 Host Read Commands: 55114 00:08:15.650 Host Write Commands: 54000 00:08:15.650 Controller Busy Time: 0 minutes 00:08:15.650 Power Cycles: 0 00:08:15.650 Power On Hours: 0 hours 00:08:15.650 Unsafe Shutdowns: 0 00:08:15.650 Unrecoverable Media Errors: 0 00:08:15.650 Lifetime Error Log Entries: 0 00:08:15.650 Warning Temperature Time: 0 minutes 00:08:15.650 Critical Temperature Time: 0 minutes 00:08:15.650 00:08:15.650 Number of Queues 00:08:15.650 ================ 00:08:15.650 Number of I/O Submission Queues: 64 00:08:15.650 Number of I/O Completion Queues: 64 00:08:15.650 00:08:15.650 ZNS Specific Controller Data 00:08:15.650 ============================ 00:08:15.650 Zone Append Size Limit: 0 00:08:15.650 00:08:15.650 00:08:15.650 Active Namespaces 00:08:15.650 ================= 00:08:15.650 Namespace ID:1 00:08:15.650 Error Recovery Timeout: Unlimited 
00:08:15.650 Command Set Identifier: NVM (00h) 00:08:15.650 Deallocate: Supported 00:08:15.650 Deallocated/Unwritten Error: Supported 00:08:15.650 Deallocated Read Value: All 0x00 00:08:15.650 Deallocate in Write Zeroes: Not Supported 00:08:15.650 Deallocated Guard Field: 0xFFFF 00:08:15.650 Flush: Supported 00:08:15.650 Reservation: Not Supported 00:08:15.650 Namespace Sharing Capabilities: Private 00:08:15.650 Size (in LBAs): 1310720 (5GiB) 00:08:15.650 Capacity (in LBAs): 1310720 (5GiB) 00:08:15.650 Utilization (in LBAs): 1310720 (5GiB) 00:08:15.650 Thin Provisioning: Not Supported 00:08:15.650 Per-NS Atomic Units: No 00:08:15.650 Maximum Single Source Range Length: 128 00:08:15.650 Maximum Copy Length: 128 00:08:15.650 Maximum Source Range Count: 128 00:08:15.650 NGUID/EUI64 Never Reused: No 00:08:15.650 Namespace Write Protected: No 00:08:15.650 Number of LBA Formats: 8 00:08:15.650 Current LBA Format: LBA Format #04 00:08:15.650 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:15.650 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:15.650 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:15.650 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:15.650 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:15.650 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:15.650 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:15.650 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:15.650 00:08:15.650 NVM Specific Namespace Data 00:08:15.650 =========================== 00:08:15.650 Logical Block Storage Tag Mask: 0 00:08:15.650 Protection Information Capabilities: 00:08:15.650 16b Guard Protection Information Storage Tag Support: No 00:08:15.650 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:15.650 Storage Tag Check Read Support: No 00:08:15.650 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:15.650 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:15.650 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:15.650 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:15.650 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:15.650 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:15.650 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:15.650 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:15.650 ===================================================== 00:08:15.650 NVMe Controller at 0000:00:13.0 [1b36:0010] 00:08:15.650 ===================================================== 00:08:15.650 Controller Capabilities/Features 00:08:15.650 ================================ 00:08:15.650 Vendor ID: 1b36 00:08:15.650 Subsystem Vendor ID: 1af4 00:08:15.650 Serial Number: 12343 00:08:15.650 Model Number: QEMU NVMe Ctrl 00:08:15.650 Firmware Version: 8.0.0 00:08:15.650 Recommended Arb Burst: 6 00:08:15.650 IEEE OUI Identifier: 00 54 52 00:08:15.650 Multi-path I/O 00:08:15.650 May have multiple subsystem ports: No 00:08:15.650 May have multiple controllers: Yes 00:08:15.650 Associated with SR-IOV VF: No 00:08:15.650 Max Data Transfer Size: 524288 00:08:15.650 Max Number of Namespaces: 256 00:08:15.650 Max Number of I/O Queues: 64 00:08:15.650 NVMe Specification Version (VS): 
1.4 00:08:15.650 NVMe Specification Version (Identify): 1.4 00:08:15.650 Maximum Queue Entries: 2048 00:08:15.650 Contiguous Queues Required: Yes 00:08:15.650 Arbitration Mechanisms Supported 00:08:15.650 Weighted Round Robin: Not Supported 00:08:15.650 Vendor Specific: Not Supported 00:08:15.650 Reset Timeout: 7500 ms 00:08:15.650 Doorbell Stride: 4 bytes 00:08:15.650 NVM Subsystem Reset: Not Supported 00:08:15.650 Command Sets Supported 00:08:15.650 NVM Command Set: Supported 00:08:15.650 Boot Partition: Not Supported 00:08:15.650 Memory Page Size Minimum: 4096 bytes 00:08:15.650 Memory Page Size Maximum: 65536 bytes 00:08:15.650 Persistent Memory Region: Not Supported 00:08:15.650 Optional Asynchronous Events Supported 00:08:15.650 Namespace Attribute Notices: Supported 00:08:15.650 Firmware Activation Notices: Not Supported 00:08:15.650 ANA Change Notices: Not Supported 00:08:15.650 PLE Aggregate Log Change Notices: Not Supported 00:08:15.651 LBA Status Info Alert Notices: Not Supported 00:08:15.651 EGE Aggregate Log Change Notices: Not Supported 00:08:15.651 Normal NVM Subsystem Shutdown event: Not Supported 00:08:15.651 Zone Descriptor Change Notices: Not Supported 00:08:15.651 Discovery Log Change Notices: Not Supported 00:08:15.651 Controller Attributes 00:08:15.651 128-bit Host Identifier: Not Supported 00:08:15.651 Non-Operational Permissive Mode: Not Supported 00:08:15.651 NVM Sets: Not Supported 00:08:15.651 Read Recovery Levels: Not Supported 00:08:15.651 Endurance Groups: Supported 00:08:15.651 Predictable Latency Mode: Not Supported 00:08:15.651 Traffic Based Keep ALive: Not Supported 00:08:15.651 Namespace Granularity: Not Supported 00:08:15.651 SQ Associations: Not Supported 00:08:15.651 UUID List: Not Supported 00:08:15.651 Multi-Domain Subsystem: Not Supported 00:08:15.651 Fixed Capacity Management: Not Supported 00:08:15.651 Variable Capacity Management: Not Supported 00:08:15.651 Delete Endurance Group: Not Supported 00:08:15.651 Delete NVM Set: Not Supported 00:08:15.651 Extended LBA Formats Supported: Supported 00:08:15.651 Flexible Data Placement Supported: Supported 00:08:15.651 00:08:15.651 Controller Memory Buffer Support 00:08:15.651 ================================ 00:08:15.651 Supported: No 00:08:15.651 00:08:15.651 Persistent Memory Region Support 00:08:15.651 ================================ 00:08:15.651 Supported: No 00:08:15.651 00:08:15.651 Admin Command Set Attributes 00:08:15.651 ============================ 00:08:15.651 Security Send/Receive: Not Supported 00:08:15.651 Format NVM: Supported 00:08:15.651 Firmware Activate/Download: Not Supported 00:08:15.651 Namespace Management: Supported 00:08:15.651 Device Self-Test: Not Supported 00:08:15.651 Directives: Supported 00:08:15.651 NVMe-MI: Not Supported 00:08:15.651 Virtualization Management: Not Supported 00:08:15.651 Doorbell Buffer Config: Supported 00:08:15.651 Get LBA Status Capability: Not Supported 00:08:15.651 Command & Feature Lockdown Capability: Not Supported 00:08:15.651 Abort Command Limit: 4 00:08:15.651 Async Event Request Limit: 4 00:08:15.651 Number of Firmware Slots: N/A 00:08:15.651 Firmware Slot 1 Read-Only: N/A 00:08:15.651 Firmware Activation Without Reset: N/A 00:08:15.651 Multiple Update Detection Support: N/A 00:08:15.651 Firmware Update Granularity: No Information Provided 00:08:15.651 Per-Namespace SMART Log: Yes 00:08:15.651 Asymmetric Namespace Access Log Page: Not Supported 00:08:15.651 Subsystem NQN: nqn.2019-08.org.qemu:fdp-subsys3 00:08:15.651 Command Effects Log Page: 
Supported 00:08:15.651 Get Log Page Extended Data: Supported 00:08:15.651 Telemetry Log Pages: Not Supported 00:08:15.651 Persistent Event Log Pages: Not Supported 00:08:15.651 Supported Log Pages Log Page: May Support 00:08:15.651 Commands Supported & Effects Log Page: Not Supported 00:08:15.651 Feature Identifiers & Effects Log Page:May Support 00:08:15.651 NVMe-MI Commands & Effects Log Page: May Support 00:08:15.651 Data Area 4 for Telemetry Log: Not Supported 00:08:15.651 Error Log Page Entries Supported: 1 00:08:15.651 Keep Alive: Not Supported 00:08:15.651 00:08:15.651 NVM Command Set Attributes 00:08:15.651 ========================== 00:08:15.651 Submission Queue Entry Size 00:08:15.651 Max: 64 00:08:15.651 Min: 64 00:08:15.651 Completion Queue Entry Size 00:08:15.651 Max: 16 00:08:15.651 Min: 16 00:08:15.651 Number of Namespaces: 256 00:08:15.651 Compare Command: Supported 00:08:15.651 Write Uncorrectable Command: Not Supported 00:08:15.651 Dataset Management Command: Supported 00:08:15.651 Write Zeroes Command: Supported 00:08:15.651 Set Features Save Field: Supported 00:08:15.651 Reservations: Not Supported 00:08:15.651 Timestamp: Supported 00:08:15.651 Copy: Supported 00:08:15.651 Volatile Write Cache: Present 00:08:15.651 Atomic Write Unit (Normal): 1 00:08:15.651 Atomic Write Unit (PFail): 1 00:08:15.651 Atomic Compare & Write Unit: 1 00:08:15.651 Fused Compare & Write: Not Supported 00:08:15.651 Scatter-Gather List 00:08:15.651 SGL Command Set: Supported 00:08:15.651 SGL Keyed: Not Supported 00:08:15.651 SGL Bit Bucket Descriptor: Not Supported 00:08:15.651 SGL Metadata Pointer: Not Supported 00:08:15.651 Oversized SGL: Not Supported 00:08:15.651 SGL Metadata Address: Not Supported 00:08:15.651 SGL Offset: Not Supported 00:08:15.651 Transport SGL Data Block: Not Supported 00:08:15.651 Replay Protected Memory Block: Not Supported 00:08:15.651 00:08:15.651 Firmware Slot Information 00:08:15.651 ========================= 00:08:15.651 Active slot: 1 00:08:15.651 Slot 1 Firmware Revision: 1.0 00:08:15.651 00:08:15.651 00:08:15.651 Commands Supported and Effects 00:08:15.651 ============================== 00:08:15.651 Admin Commands 00:08:15.651 -------------- 00:08:15.651 Delete I/O Submission Queue (00h): Supported 00:08:15.651 Create I/O Submission Queue (01h): Supported 00:08:15.651 Get Log Page (02h): Supported 00:08:15.651 Delete I/O Completion Queue (04h): Supported 00:08:15.651 Create I/O Completion Queue (05h): Supported 00:08:15.651 Identify (06h): Supported 00:08:15.651 Abort (08h): Supported 00:08:15.651 Set Features (09h): Supported 00:08:15.651 Get Features (0Ah): Supported 00:08:15.651 Asynchronous Event Request (0Ch): Supported 00:08:15.651 Namespace Attachment (15h): Supported NS-Inventory-Change 00:08:15.651 Directive Send (19h): Supported 00:08:15.651 Directive Receive (1Ah): Supported 00:08:15.651 Virtualization Management (1Ch): Supported 00:08:15.651 Doorbell Buffer Config (7Ch): Supported 00:08:15.651 Format NVM (80h): Supported LBA-Change 00:08:15.651 I/O Commands 00:08:15.651 ------------ 00:08:15.651 Flush (00h): Supported LBA-Change 00:08:15.651 Write (01h): Supported LBA-Change 00:08:15.651 Read (02h): Supported 00:08:15.651 Compare (05h): Supported 00:08:15.651 Write Zeroes (08h): Supported LBA-Change 00:08:15.651 Dataset Management (09h): Supported LBA-Change 00:08:15.651 Unknown (0Ch): Supported 00:08:15.651 Unknown (12h): Supported 00:08:15.651 Copy (19h): Supported LBA-Change 00:08:15.651 Unknown (1Dh): Supported LBA-Change 00:08:15.651 
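(Aside: spdk_nvme_identify -i 0 walks every controller the stub attached — 12341, 12343, 12340, 12342 — which is why the bracketed *ERROR* lines about process 74668 are interleaved stderr cutting across the report rather than part of it. To dump one controller at a time, the same binary can be pointed at a single transport ID; a hedged sketch, assuming the -r transport-ID flag of the stock SPDK example tools:

    # Identify only the FDP-capable 12343 device seen at 0000:00:13.0
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify \
        -r 'trtype:PCIe traddr:0000:00:13.0'

)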
00:08:15.651 Error Log 00:08:15.651 ========= 00:08:15.651 00:08:15.651 Arbitration 00:08:15.651 =========== 00:08:15.651 Arbitration Burst: no limit 00:08:15.651 00:08:15.651 Power Management 00:08:15.651 ================ 00:08:15.651 Number of Power States: 1 00:08:15.651 Current Power State: Power State #0 00:08:15.651 Power State #0: 00:08:15.651 Max Power: 25.00 W 00:08:15.651 Non-Operational State: Operational 00:08:15.651 Entry Latency: 16 microseconds 00:08:15.651 Exit Latency: 4 microseconds 00:08:15.651 Relative Read Throughput: 0 00:08:15.651 Relative Read Latency: 0 00:08:15.651 Relative Write Throughput: 0 00:08:15.651 Relative Write Latency: 0 00:08:15.651 Idle Power: Not Reported 00:08:15.651 Active Power: Not Reported 00:08:15.651 Non-Operational Permissive Mode: Not Supported 00:08:15.651 00:08:15.651 Health Information 00:08:15.651 ================== 00:08:15.651 Critical Warnings: 00:08:15.651 Available Spare Space: OK 00:08:15.651 Temperature: OK 00:08:15.651 Device Reliability: OK 00:08:15.651 Read Only: [2024-11-18 06:42:08.512641] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:13.0, 0] process 74668 terminated unexpected 00:08:15.651 No 00:08:15.651 Volatile Memory Backup: OK 00:08:15.651 Current Temperature: 323 Kelvin (50 Celsius) 00:08:15.651 Temperature Threshold: 343 Kelvin (70 Celsius) 00:08:15.651 Available Spare: 0% 00:08:15.651 Available Spare Threshold: 0% 00:08:15.651 Life Percentage Used: 0% 00:08:15.651 Data Units Read: 871 00:08:15.651 Data Units Written: 800 00:08:15.651 Host Read Commands: 39580 00:08:15.651 Host Write Commands: 39003 00:08:15.651 Controller Busy Time: 0 minutes 00:08:15.651 Power Cycles: 0 00:08:15.651 Power On Hours: 0 hours 00:08:15.651 Unsafe Shutdowns: 0 00:08:15.651 Unrecoverable Media Errors: 0 00:08:15.651 Lifetime Error Log Entries: 0 00:08:15.651 Warning Temperature Time: 0 minutes 00:08:15.651 Critical Temperature Time: 0 minutes 00:08:15.651 00:08:15.651 Number of Queues 00:08:15.651 ================ 00:08:15.651 Number of I/O Submission Queues: 64 00:08:15.651 Number of I/O Completion Queues: 64 00:08:15.651 00:08:15.651 ZNS Specific Controller Data 00:08:15.651 ============================ 00:08:15.651 Zone Append Size Limit: 0 00:08:15.651 00:08:15.651 00:08:15.651 Active Namespaces 00:08:15.651 ================= 00:08:15.651 Namespace ID:1 00:08:15.651 Error Recovery Timeout: Unlimited 00:08:15.651 Command Set Identifier: NVM (00h) 00:08:15.652 Deallocate: Supported 00:08:15.652 Deallocated/Unwritten Error: Supported 00:08:15.652 Deallocated Read Value: All 0x00 00:08:15.652 Deallocate in Write Zeroes: Not Supported 00:08:15.652 Deallocated Guard Field: 0xFFFF 00:08:15.652 Flush: Supported 00:08:15.652 Reservation: Not Supported 00:08:15.652 Namespace Sharing Capabilities: Multiple Controllers 00:08:15.652 Size (in LBAs): 262144 (1GiB) 00:08:15.652 Capacity (in LBAs): 262144 (1GiB) 00:08:15.652 Utilization (in LBAs): 262144 (1GiB) 00:08:15.652 Thin Provisioning: Not Supported 00:08:15.652 Per-NS Atomic Units: No 00:08:15.652 Maximum Single Source Range Length: 128 00:08:15.652 Maximum Copy Length: 128 00:08:15.652 Maximum Source Range Count: 128 00:08:15.652 NGUID/EUI64 Never Reused: No 00:08:15.652 Namespace Write Protected: No 00:08:15.652 Endurance group ID: 1 00:08:15.652 Number of LBA Formats: 8 00:08:15.652 Current LBA Format: LBA Format #04 00:08:15.652 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:15.652 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:15.652 LBA Format #02: 
Data Size: 512 Metadata Size: 16 00:08:15.652 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:15.652 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:15.652 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:15.652 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:15.652 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:15.652 00:08:15.652 Get Feature FDP: 00:08:15.652 ================ 00:08:15.652 Enabled: Yes 00:08:15.652 FDP configuration index: 0 00:08:15.652 00:08:15.652 FDP configurations log page 00:08:15.652 =========================== 00:08:15.652 Number of FDP configurations: 1 00:08:15.652 Version: 0 00:08:15.652 Size: 112 00:08:15.652 FDP Configuration Descriptor: 0 00:08:15.652 Descriptor Size: 96 00:08:15.652 Reclaim Group Identifier format: 2 00:08:15.652 FDP Volatile Write Cache: Not Present 00:08:15.652 FDP Configuration: Valid 00:08:15.652 Vendor Specific Size: 0 00:08:15.652 Number of Reclaim Groups: 2 00:08:15.652 Number of Reclaim Unit Handles: 8 00:08:15.652 Max Placement Identifiers: 128 00:08:15.652 Number of Namespaces Supported: 256 00:08:15.652 Reclaim Unit Nominal Size: 6000000 bytes 00:08:15.652 Estimated Reclaim Unit Time Limit: Not Reported 00:08:15.652 RUH Desc #000: RUH Type: Initially Isolated 00:08:15.652 RUH Desc #001: RUH Type: Initially Isolated 00:08:15.652 RUH Desc #002: RUH Type: Initially Isolated 00:08:15.652 RUH Desc #003: RUH Type: Initially Isolated 00:08:15.652 RUH Desc #004: RUH Type: Initially Isolated 00:08:15.652 RUH Desc #005: RUH Type: Initially Isolated 00:08:15.652 RUH Desc #006: RUH Type: Initially Isolated 00:08:15.652 RUH Desc #007: RUH Type: Initially Isolated 00:08:15.652 00:08:15.652 FDP reclaim unit handle usage log page 00:08:15.652 ====================================== 00:08:15.652 Number of Reclaim Unit Handles: 8 00:08:15.652 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:08:15.652 RUH Usage Desc #001: RUH Attributes: Unused 00:08:15.652 RUH Usage Desc #002: RUH Attributes: Unused 00:08:15.652 RUH Usage Desc #003: RUH Attributes: Unused 00:08:15.652 RUH Usage Desc #004: RUH Attributes: Unused 00:08:15.652 R[2024-11-18 06:42:08.516358] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:10.0, 0] process 74668 terminated unexpected 00:08:15.652 UH Usage Desc #005: RUH Attributes: Unused 00:08:15.652 RUH Usage Desc #006: RUH Attributes: Unused 00:08:15.652 RUH Usage Desc #007: RUH Attributes: Unused 00:08:15.652 00:08:15.652 FDP statistics log page 00:08:15.652 ======================= 00:08:15.652 Host bytes with metadata written: 504930304 00:08:15.652 Media bytes with metadata written: 504987648 00:08:15.652 Media bytes erased: 0 00:08:15.652 00:08:15.652 FDP events log page 00:08:15.652 =================== 00:08:15.652 Number of FDP events: 0 00:08:15.652 00:08:15.652 NVM Specific Namespace Data 00:08:15.652 =========================== 00:08:15.652 Logical Block Storage Tag Mask: 0 00:08:15.652 Protection Information Capabilities: 00:08:15.652 16b Guard Protection Information Storage Tag Support: No 00:08:15.652 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:15.652 Storage Tag Check Read Support: No 00:08:15.652 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:15.652 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:15.652 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:15.652 Extended
LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:15.652 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:15.652 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:15.652 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:15.652 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:15.652 ===================================================== 00:08:15.652 NVMe Controller at 0000:00:10.0 [1b36:0010] 00:08:15.652 ===================================================== 00:08:15.652 Controller Capabilities/Features 00:08:15.652 ================================ 00:08:15.652 Vendor ID: 1b36 00:08:15.652 Subsystem Vendor ID: 1af4 00:08:15.652 Serial Number: 12340 00:08:15.652 Model Number: QEMU NVMe Ctrl 00:08:15.652 Firmware Version: 8.0.0 00:08:15.652 Recommended Arb Burst: 6 00:08:15.652 IEEE OUI Identifier: 00 54 52 00:08:15.652 Multi-path I/O 00:08:15.652 May have multiple subsystem ports: No 00:08:15.652 May have multiple controllers: No 00:08:15.652 Associated with SR-IOV VF: No 00:08:15.652 Max Data Transfer Size: 524288 00:08:15.652 Max Number of Namespaces: 256 00:08:15.652 Max Number of I/O Queues: 64 00:08:15.652 NVMe Specification Version (VS): 1.4 00:08:15.652 NVMe Specification Version (Identify): 1.4 00:08:15.652 Maximum Queue Entries: 2048 00:08:15.652 Contiguous Queues Required: Yes 00:08:15.652 Arbitration Mechanisms Supported 00:08:15.652 Weighted Round Robin: Not Supported 00:08:15.652 Vendor Specific: Not Supported 00:08:15.652 Reset Timeout: 7500 ms 00:08:15.652 Doorbell Stride: 4 bytes 00:08:15.652 NVM Subsystem Reset: Not Supported 00:08:15.652 Command Sets Supported 00:08:15.652 NVM Command Set: Supported 00:08:15.652 Boot Partition: Not Supported 00:08:15.652 Memory Page Size Minimum: 4096 bytes 00:08:15.652 Memory Page Size Maximum: 65536 bytes 00:08:15.653 Persistent Memory Region: Not Supported 00:08:15.653 Optional Asynchronous Events Supported 00:08:15.653 Namespace Attribute Notices: Supported 00:08:15.653 Firmware Activation Notices: Not Supported 00:08:15.653 ANA Change Notices: Not Supported 00:08:15.653 PLE Aggregate Log Change Notices: Not Supported 00:08:15.653 LBA Status Info Alert Notices: Not Supported 00:08:15.653 EGE Aggregate Log Change Notices: Not Supported 00:08:15.653 Normal NVM Subsystem Shutdown event: Not Supported 00:08:15.653 Zone Descriptor Change Notices: Not Supported 00:08:15.653 Discovery Log Change Notices: Not Supported 00:08:15.653 Controller Attributes 00:08:15.653 128-bit Host Identifier: Not Supported 00:08:15.653 Non-Operational Permissive Mode: Not Supported 00:08:15.653 NVM Sets: Not Supported 00:08:15.653 Read Recovery Levels: Not Supported 00:08:15.653 Endurance Groups: Not Supported 00:08:15.653 Predictable Latency Mode: Not Supported 00:08:15.653 Traffic Based Keep ALive: Not Supported 00:08:15.653 Namespace Granularity: Not Supported 00:08:15.653 SQ Associations: Not Supported 00:08:15.653 UUID List: Not Supported 00:08:15.653 Multi-Domain Subsystem: Not Supported 00:08:15.653 Fixed Capacity Management: Not Supported 00:08:15.653 Variable Capacity Management: Not Supported 00:08:15.653 Delete Endurance Group: Not Supported 00:08:15.653 Delete NVM Set: Not Supported 00:08:15.653 Extended LBA Formats Supported: Supported 00:08:15.653 Flexible Data Placement Supported: Not Supported 00:08:15.653 00:08:15.653 
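(Aside: the namespace listings above report capacity both in LBAs and in GiB, and with the 4096-byte data formats in use the two agree exactly; a quick check with shell arithmetic, using the figures printed for the 12341 and 12343 namespaces:

    echo $(( 1310720 * 4096 ))   # 5368709120 bytes = 5 GiB  (12341, Namespace ID:1)
    echo $(( 262144  * 4096 ))   # 1073741824 bytes = 1 GiB  (12343, Namespace ID:1)

)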
Controller Memory Buffer Support 00:08:15.653 ================================ 00:08:15.653 Supported: No 00:08:15.653 00:08:15.653 Persistent Memory Region Support 00:08:15.653 ================================ 00:08:15.653 Supported: No 00:08:15.653 00:08:15.653 Admin Command Set Attributes 00:08:15.653 ============================ 00:08:15.653 Security Send/Receive: Not Supported 00:08:15.653 Format NVM: Supported 00:08:15.653 Firmware Activate/Download: Not Supported 00:08:15.653 Namespace Management: Supported 00:08:15.653 Device Self-Test: Not Supported 00:08:15.653 Directives: Supported 00:08:15.653 NVMe-MI: Not Supported 00:08:15.653 Virtualization Management: Not Supported 00:08:15.653 Doorbell Buffer Config: Supported 00:08:15.653 Get LBA Status Capability: Not Supported 00:08:15.653 Command & Feature Lockdown Capability: Not Supported 00:08:15.653 Abort Command Limit: 4 00:08:15.653 Async Event Request Limit: 4 00:08:15.653 Number of Firmware Slots: N/A 00:08:15.653 Firmware Slot 1 Read-Only: N/A 00:08:15.653 Firmware Activation Without Reset: N/A 00:08:15.653 Multiple Update Detection Support: N/A 00:08:15.653 Firmware Update Granularity: No Information Provided 00:08:15.653 Per-Namespace SMART Log: Yes 00:08:15.653 Asymmetric Namespace Access Log Page: Not Supported 00:08:15.653 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:08:15.653 Command Effects Log Page: Supported 00:08:15.653 Get Log Page Extended Data: Supported 00:08:15.653 Telemetry Log Pages: Not Supported 00:08:15.653 Persistent Event Log Pages: Not Supported 00:08:15.653 Supported Log Pages Log Page: May Support 00:08:15.653 Commands Supported & Effects Log Page: Not Supported 00:08:15.653 Feature Identifiers & Effects Log Page:May Support 00:08:15.653 NVMe-MI Commands & Effects Log Page: May Support 00:08:15.653 Data Area 4 for Telemetry Log: Not Supported 00:08:15.653 Error Log Page Entries Supported: 1 00:08:15.653 Keep Alive: Not Supported 00:08:15.653 00:08:15.653 NVM Command Set Attributes 00:08:15.653 ========================== 00:08:15.653 Submission Queue Entry Size 00:08:15.653 Max: 64 00:08:15.653 Min: 64 00:08:15.653 Completion Queue Entry Size 00:08:15.653 Max: 16 00:08:15.653 Min: 16 00:08:15.653 Number of Namespaces: 256 00:08:15.653 Compare Command: Supported 00:08:15.653 Write Uncorrectable Command: Not Supported 00:08:15.653 Dataset Management Command: Supported 00:08:15.653 Write Zeroes Command: Supported 00:08:15.653 Set Features Save Field: Supported 00:08:15.653 Reservations: Not Supported 00:08:15.653 Timestamp: Supported 00:08:15.653 Copy: Supported 00:08:15.653 Volatile Write Cache: Present 00:08:15.653 Atomic Write Unit (Normal): 1 00:08:15.653 Atomic Write Unit (PFail): 1 00:08:15.653 Atomic Compare & Write Unit: 1 00:08:15.653 Fused Compare & Write: Not Supported 00:08:15.653 Scatter-Gather List 00:08:15.653 SGL Command Set: Supported 00:08:15.653 SGL Keyed: Not Supported 00:08:15.653 SGL Bit Bucket Descriptor: Not Supported 00:08:15.653 SGL Metadata Pointer: Not Supported 00:08:15.653 Oversized SGL: Not Supported 00:08:15.653 SGL Metadata Address: Not Supported 00:08:15.653 SGL Offset: Not Supported 00:08:15.653 Transport SGL Data Block: Not Supported 00:08:15.653 Replay Protected Memory Block: Not Supported 00:08:15.653 00:08:15.653 Firmware Slot Information 00:08:15.653 ========================= 00:08:15.653 Active slot: 1 00:08:15.653 Slot 1 Firmware Revision: 1.0 00:08:15.653 00:08:15.653 00:08:15.653 Commands Supported and Effects 00:08:15.653 ============================== 
00:08:15.653 Admin Commands 00:08:15.653 -------------- 00:08:15.653 Delete I/O Submission Queue (00h): Supported 00:08:15.653 Create I/O Submission Queue (01h): Supported 00:08:15.653 Get Log Page (02h): Supported 00:08:15.653 Delete I/O Completion Queue (04h): Supported 00:08:15.653 Create I/O Completion Queue (05h): Supported 00:08:15.653 Identify (06h): Supported 00:08:15.653 Abort (08h): Supported 00:08:15.653 Set Features (09h): Supported 00:08:15.653 Get Features (0Ah): Supported 00:08:15.653 Asynchronous Event Request (0Ch): Supported 00:08:15.653 Namespace Attachment (15h): Supported NS-Inventory-Change 00:08:15.653 Directive Send (19h): Supported 00:08:15.653 Directive Receive (1Ah): Supported 00:08:15.653 Virtualization Management (1Ch): Supported 00:08:15.653 Doorbell Buffer Config (7Ch): Supported 00:08:15.653 Format NVM (80h): Supported LBA-Change 00:08:15.653 I/O Commands 00:08:15.653 ------------ 00:08:15.653 Flush (00h): Supported LBA-Change 00:08:15.653 Write (01h): Supported LBA-Change 00:08:15.653 Read (02h): Supported 00:08:15.653 Compare (05h): Supported 00:08:15.653 Write Zeroes (08h): Supported LBA-Change 00:08:15.653 Dataset Management (09h): Supported LBA-Change 00:08:15.653 Unknown (0Ch): Supported 00:08:15.653 Unknown (12h): Supported 00:08:15.653 Copy (19h): Supported LBA-Change 00:08:15.653 Unknown (1Dh): Supported LBA-Change 00:08:15.653 00:08:15.653 Error Log 00:08:15.653 ========= 00:08:15.653 00:08:15.653 Arbitration 00:08:15.653 =========== 00:08:15.653 Arbitration Burst: no limit 00:08:15.653 00:08:15.653 Power Management 00:08:15.653 ================ 00:08:15.653 Number of Power States: 1 00:08:15.653 Current Power State: Power State #0 00:08:15.653 Power State #0: 00:08:15.653 Max Power: 25.00 W 00:08:15.653 Non-Operational State: Operational 00:08:15.653 Entry Latency: 16 microseconds 00:08:15.653 Exit Latency: 4 microseconds 00:08:15.653 Relative Read Throughput: 0 00:08:15.653 Relative Read Latency: 0 00:08:15.653 Relative Write Throughput: 0 00:08:15.653 Relative Write Latency: 0 00:08:15.653 Idle Power: Not Reported 00:08:15.653 Active Power: Not Reported 00:08:15.654 Non-Operational Permissive Mode: Not Supported 00:08:15.654 00:08:15.654 Health Information 00:08:15.654 ================== 00:08:15.654 Critical Warnings: 00:08:15.654 Available Spare Space: OK 00:08:15.654 Temperature: OK 00:08:15.654 Device Reliability: OK 00:08:15.654 Read Only: No 00:08:15.654 Volatile Memory Backup: OK 00:08:15.654 Current Temperature: 323 Kelvin (50 Celsius) 00:08:15.654 Temperature Threshold: 343 Kelvin (70 Celsius) 00:08:15.654 Available Spare: 0% 00:08:15.654 Available Spare Threshold: 0% 00:08:15.654 Life Percentage Used: 0% 00:08:15.654 Data Units Read: 696 00:08:15.654 Data Units Written: 624 00:08:15.654 Host Read Commands: 37919 00:08:15.654 Host Write Commands: 37705 00:08:15.654 Controller Busy Time: 0 minutes 00:08:15.654 Power Cycles: 0 00:08:15.654 Power On Hours: 0 hours 00:08:15.654 Unsafe Shutdowns: 0 00:08:15.654 Unrecoverable Media Errors: 0 00:08:15.654 Lifetime Error Log Entries: 0 00:08:15.654 Warning Temperature Time: 0 minutes 00:08:15.654 Critical Temperature Time: 0 minutes 00:08:15.654 00:08:15.654 Number of Queues 00:08:15.654 ================ 00:08:15.654 Number of I/O Submission Queues: 64 00:08:15.654 Number of I/O Completion Queues: 64 00:08:15.654 00:08:15.654 ZNS Specific Controller Data 00:08:15.654 ============================ 00:08:15.654 Zone Append Size Limit: 0 00:08:15.654 00:08:15.654 00:08:15.654 Active Namespaces 
00:08:15.654 ================= 00:08:15.654 Namespace ID:1 00:08:15.654 Error Recovery Timeout: Unlimited 00:08:15.654 Command Set Identifier: NVM (00h) 00:08:15.654 Deallocate: Supported 00:08:15.654 Deallocated/Unwritten Error: Supported 00:08:15.654 Deallocated Read Value: All 0x00 00:08:15.654 Deallocate in Write Zeroes: Not Supported 00:08:15.654 Deallocated Guard Field: 0xFFFF 00:08:15.654 Flush: Supported 00:08:15.654 Reservation: Not Supported 00:08:15.654 Metadata Transferred as: Separate Metadata Buffer 00:08:15.654 Namespace Sharing Capabilities: Private 00:08:15.654 Size (in LBAs): 1548666 (5GiB) 00:08:15.654 Capacity (in LBAs): 1548666 (5GiB) 00:08:15.654 Utilization (in LBAs): 1548666 (5GiB) 00:08:15.654 Thin Provisioning: Not Supported 00:08:15.654 Per-NS Atomic Units: No 00:08:15.654 Maximum Single Source Range Length: 128 00:08:15.654 Maximum Copy Length: 128 00:08:15.654 Maximum Source Range Count: 128 00:08:15.654 NGUID/EUI64 Never Reused: No 00:08:15.654 Namespace Write Protected: No 00:08:15.654 Number of LBA Formats: 8 00:08:15.654 Current LBA Format: LBA Format #07 00:08:15.654 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:15.654 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:15.654 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:15.654 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:15.654 LBA Format #04: Data Size: 4096 Metadata Size: 0 [2024-11-18 06:42:08.518059] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:12.0, 0] process 74668 terminated unexpected 00:08:15.654 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:15.654 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:15.654 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:15.654 00:08:15.654 NVM Specific Namespace Data 00:08:15.654 =========================== 00:08:15.654 Logical Block Storage Tag Mask: 0 00:08:15.654 Protection Information Capabilities: 00:08:15.654 16b Guard Protection Information Storage Tag Support: No 00:08:15.654 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:15.654 Storage Tag Check Read Support: No 00:08:15.654 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:15.654 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:15.654 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:15.654 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:15.654 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:15.654 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:15.654 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:15.654 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:15.654 ===================================================== 00:08:15.654 NVMe Controller at 0000:00:12.0 [1b36:0010] 00:08:15.654 ===================================================== 00:08:15.654 Controller Capabilities/Features 00:08:15.654 ================================ 00:08:15.654 Vendor ID: 1b36 00:08:15.654 Subsystem Vendor ID: 1af4 00:08:15.654 Serial Number: 12342 00:08:15.654 Model Number: QEMU NVMe Ctrl 00:08:15.654 Firmware Version: 8.0.0 00:08:15.654 Recommended Arb Burst: 6 00:08:15.654 IEEE OUI Identifier: 00 54 52 00:08:15.654 
Multi-path I/O 00:08:15.654 May have multiple subsystem ports: No 00:08:15.654 May have multiple controllers: No 00:08:15.654 Associated with SR-IOV VF: No 00:08:15.654 Max Data Transfer Size: 524288 00:08:15.654 Max Number of Namespaces: 256 00:08:15.654 Max Number of I/O Queues: 64 00:08:15.654 NVMe Specification Version (VS): 1.4 00:08:15.654 NVMe Specification Version (Identify): 1.4 00:08:15.654 Maximum Queue Entries: 2048 00:08:15.654 Contiguous Queues Required: Yes 00:08:15.654 Arbitration Mechanisms Supported 00:08:15.654 Weighted Round Robin: Not Supported 00:08:15.654 Vendor Specific: Not Supported 00:08:15.654 Reset Timeout: 7500 ms 00:08:15.654 Doorbell Stride: 4 bytes 00:08:15.654 NVM Subsystem Reset: Not Supported 00:08:15.654 Command Sets Supported 00:08:15.654 NVM Command Set: Supported 00:08:15.654 Boot Partition: Not Supported 00:08:15.654 Memory Page Size Minimum: 4096 bytes 00:08:15.654 Memory Page Size Maximum: 65536 bytes 00:08:15.654 Persistent Memory Region: Not Supported 00:08:15.654 Optional Asynchronous Events Supported 00:08:15.654 Namespace Attribute Notices: Supported 00:08:15.654 Firmware Activation Notices: Not Supported 00:08:15.654 ANA Change Notices: Not Supported 00:08:15.654 PLE Aggregate Log Change Notices: Not Supported 00:08:15.654 LBA Status Info Alert Notices: Not Supported 00:08:15.654 EGE Aggregate Log Change Notices: Not Supported 00:08:15.654 Normal NVM Subsystem Shutdown event: Not Supported 00:08:15.654 Zone Descriptor Change Notices: Not Supported 00:08:15.654 Discovery Log Change Notices: Not Supported 00:08:15.654 Controller Attributes 00:08:15.654 128-bit Host Identifier: Not Supported 00:08:15.654 Non-Operational Permissive Mode: Not Supported 00:08:15.654 NVM Sets: Not Supported 00:08:15.654 Read Recovery Levels: Not Supported 00:08:15.654 Endurance Groups: Not Supported 00:08:15.654 Predictable Latency Mode: Not Supported 00:08:15.654 Traffic Based Keep ALive: Not Supported 00:08:15.654 Namespace Granularity: Not Supported 00:08:15.654 SQ Associations: Not Supported 00:08:15.654 UUID List: Not Supported 00:08:15.654 Multi-Domain Subsystem: Not Supported 00:08:15.654 Fixed Capacity Management: Not Supported 00:08:15.654 Variable Capacity Management: Not Supported 00:08:15.654 Delete Endurance Group: Not Supported 00:08:15.654 Delete NVM Set: Not Supported 00:08:15.654 Extended LBA Formats Supported: Supported 00:08:15.654 Flexible Data Placement Supported: Not Supported 00:08:15.654 00:08:15.654 Controller Memory Buffer Support 00:08:15.654 ================================ 00:08:15.654 Supported: No 00:08:15.654 00:08:15.654 Persistent Memory Region Support 00:08:15.654 ================================ 00:08:15.655 Supported: No 00:08:15.655 00:08:15.655 Admin Command Set Attributes 00:08:15.655 ============================ 00:08:15.655 Security Send/Receive: Not Supported 00:08:15.655 Format NVM: Supported 00:08:15.655 Firmware Activate/Download: Not Supported 00:08:15.655 Namespace Management: Supported 00:08:15.655 Device Self-Test: Not Supported 00:08:15.655 Directives: Supported 00:08:15.655 NVMe-MI: Not Supported 00:08:15.655 Virtualization Management: Not Supported 00:08:15.655 Doorbell Buffer Config: Supported 00:08:15.655 Get LBA Status Capability: Not Supported 00:08:15.655 Command & Feature Lockdown Capability: Not Supported 00:08:15.655 Abort Command Limit: 4 00:08:15.655 Async Event Request Limit: 4 00:08:15.655 Number of Firmware Slots: N/A 00:08:15.655 Firmware Slot 1 Read-Only: N/A 00:08:15.655 Firmware Activation 
Without Reset: N/A 00:08:15.655 Multiple Update Detection Support: N/A 00:08:15.655 Firmware Update Granularity: No Information Provided 00:08:15.655 Per-Namespace SMART Log: Yes 00:08:15.655 Asymmetric Namespace Access Log Page: Not Supported 00:08:15.655 Subsystem NQN: nqn.2019-08.org.qemu:12342 00:08:15.655 Command Effects Log Page: Supported 00:08:15.655 Get Log Page Extended Data: Supported 00:08:15.655 Telemetry Log Pages: Not Supported 00:08:15.655 Persistent Event Log Pages: Not Supported 00:08:15.655 Supported Log Pages Log Page: May Support 00:08:15.655 Commands Supported & Effects Log Page: Not Supported 00:08:15.655 Feature Identifiers & Effects Log Page:May Support 00:08:15.655 NVMe-MI Commands & Effects Log Page: May Support 00:08:15.655 Data Area 4 for Telemetry Log: Not Supported 00:08:15.655 Error Log Page Entries Supported: 1 00:08:15.655 Keep Alive: Not Supported 00:08:15.655 00:08:15.655 NVM Command Set Attributes 00:08:15.655 ========================== 00:08:15.655 Submission Queue Entry Size 00:08:15.655 Max: 64 00:08:15.655 Min: 64 00:08:15.655 Completion Queue Entry Size 00:08:15.655 Max: 16 00:08:15.655 Min: 16 00:08:15.655 Number of Namespaces: 256 00:08:15.655 Compare Command: Supported 00:08:15.655 Write Uncorrectable Command: Not Supported 00:08:15.655 Dataset Management Command: Supported 00:08:15.655 Write Zeroes Command: Supported 00:08:15.655 Set Features Save Field: Supported 00:08:15.655 Reservations: Not Supported 00:08:15.655 Timestamp: Supported 00:08:15.655 Copy: Supported 00:08:15.655 Volatile Write Cache: Present 00:08:15.655 Atomic Write Unit (Normal): 1 00:08:15.655 Atomic Write Unit (PFail): 1 00:08:15.655 Atomic Compare & Write Unit: 1 00:08:15.655 Fused Compare & Write: Not Supported 00:08:15.655 Scatter-Gather List 00:08:15.655 SGL Command Set: Supported 00:08:15.655 SGL Keyed: Not Supported 00:08:15.655 SGL Bit Bucket Descriptor: Not Supported 00:08:15.655 SGL Metadata Pointer: Not Supported 00:08:15.655 Oversized SGL: Not Supported 00:08:15.655 SGL Metadata Address: Not Supported 00:08:15.655 SGL Offset: Not Supported 00:08:15.655 Transport SGL Data Block: Not Supported 00:08:15.655 Replay Protected Memory Block: Not Supported 00:08:15.655 00:08:15.655 Firmware Slot Information 00:08:15.655 ========================= 00:08:15.655 Active slot: 1 00:08:15.655 Slot 1 Firmware Revision: 1.0 00:08:15.655 00:08:15.655 00:08:15.655 Commands Supported and Effects 00:08:15.655 ============================== 00:08:15.655 Admin Commands 00:08:15.655 -------------- 00:08:15.655 Delete I/O Submission Queue (00h): Supported 00:08:15.655 Create I/O Submission Queue (01h): Supported 00:08:15.655 Get Log Page (02h): Supported 00:08:15.655 Delete I/O Completion Queue (04h): Supported 00:08:15.655 Create I/O Completion Queue (05h): Supported 00:08:15.655 Identify (06h): Supported 00:08:15.655 Abort (08h): Supported 00:08:15.655 Set Features (09h): Supported 00:08:15.655 Get Features (0Ah): Supported 00:08:15.655 Asynchronous Event Request (0Ch): Supported 00:08:15.655 Namespace Attachment (15h): Supported NS-Inventory-Change 00:08:15.655 Directive Send (19h): Supported 00:08:15.655 Directive Receive (1Ah): Supported 00:08:15.655 Virtualization Management (1Ch): Supported 00:08:15.655 Doorbell Buffer Config (7Ch): Supported 00:08:15.655 Format NVM (80h): Supported LBA-Change 00:08:15.655 I/O Commands 00:08:15.655 ------------ 00:08:15.655 Flush (00h): Supported LBA-Change 00:08:15.655 Write (01h): Supported LBA-Change 00:08:15.655 Read (02h): Supported 
00:08:15.655 Compare (05h): Supported 00:08:15.655 Write Zeroes (08h): Supported LBA-Change 00:08:15.655 Dataset Management (09h): Supported LBA-Change 00:08:15.655 Unknown (0Ch): Supported 00:08:15.655 Unknown (12h): Supported 00:08:15.655 Copy (19h): Supported LBA-Change 00:08:15.655 Unknown (1Dh): Supported LBA-Change 00:08:15.655 00:08:15.655 Error Log 00:08:15.655 ========= 00:08:15.655 00:08:15.655 Arbitration 00:08:15.655 =========== 00:08:15.655 Arbitration Burst: no limit 00:08:15.655 00:08:15.655 Power Management 00:08:15.655 ================ 00:08:15.655 Number of Power States: 1 00:08:15.655 Current Power State: Power State #0 00:08:15.655 Power State #0: 00:08:15.655 Max Power: 25.00 W 00:08:15.655 Non-Operational State: Operational 00:08:15.655 Entry Latency: 16 microseconds 00:08:15.655 Exit Latency: 4 microseconds 00:08:15.655 Relative Read Throughput: 0 00:08:15.655 Relative Read Latency: 0 00:08:15.655 Relative Write Throughput: 0 00:08:15.655 Relative Write Latency: 0 00:08:15.655 Idle Power: Not Reported 00:08:15.655 Active Power: Not Reported 00:08:15.655 Non-Operational Permissive Mode: Not Supported 00:08:15.655 00:08:15.655 Health Information 00:08:15.655 ================== 00:08:15.655 Critical Warnings: 00:08:15.655 Available Spare Space: OK 00:08:15.655 Temperature: OK 00:08:15.655 Device Reliability: OK 00:08:15.655 Read Only: No 00:08:15.655 Volatile Memory Backup: OK 00:08:15.655 Current Temperature: 323 Kelvin (50 Celsius) 00:08:15.655 Temperature Threshold: 343 Kelvin (70 Celsius) 00:08:15.655 Available Spare: 0% 00:08:15.655 Available Spare Threshold: 0% 00:08:15.655 Life Percentage Used: 0% 00:08:15.655 Data Units Read: 2297 00:08:15.655 Data Units Written: 2084 00:08:15.655 Host Read Commands: 116285 00:08:15.655 Host Write Commands: 114554 00:08:15.655 Controller Busy Time: 0 minutes 00:08:15.655 Power Cycles: 0 00:08:15.655 Power On Hours: 0 hours 00:08:15.655 Unsafe Shutdowns: 0 00:08:15.655 Unrecoverable Media Errors: 0 00:08:15.655 Lifetime Error Log Entries: 0 00:08:15.655 Warning Temperature Time: 0 minutes 00:08:15.655 Critical Temperature Time: 0 minutes 00:08:15.655 00:08:15.655 Number of Queues 00:08:15.655 ================ 00:08:15.655 Number of I/O Submission Queues: 64 00:08:15.655 Number of I/O Completion Queues: 64 00:08:15.655 00:08:15.655 ZNS Specific Controller Data 00:08:15.655 ============================ 00:08:15.655 Zone Append Size Limit: 0 00:08:15.655 00:08:15.655 00:08:15.655 Active Namespaces 00:08:15.655 ================= 00:08:15.655 Namespace ID:1 00:08:15.655 Error Recovery Timeout: Unlimited 00:08:15.655 Command Set Identifier: NVM (00h) 00:08:15.655 Deallocate: Supported 00:08:15.655 Deallocated/Unwritten Error: Supported 00:08:15.655 Deallocated Read Value: All 0x00 00:08:15.655 Deallocate in Write Zeroes: Not Supported 00:08:15.655 Deallocated Guard Field: 0xFFFF 00:08:15.655 Flush: Supported 00:08:15.655 Reservation: Not Supported 00:08:15.655 Namespace Sharing Capabilities: Private 00:08:15.655 Size (in LBAs): 1048576 (4GiB) 00:08:15.655 Capacity (in LBAs): 1048576 (4GiB) 00:08:15.655 Utilization (in LBAs): 1048576 (4GiB) 00:08:15.655 Thin Provisioning: Not Supported 00:08:15.655 Per-NS Atomic Units: No 00:08:15.655 Maximum Single Source Range Length: 128 00:08:15.655 Maximum Copy Length: 128 00:08:15.655 Maximum Source Range Count: 128 00:08:15.655 NGUID/EUI64 Never Reused: No 00:08:15.655 Namespace Write Protected: No 00:08:15.655 Number of LBA Formats: 8 00:08:15.655 Current LBA Format: LBA Format #04 00:08:15.655 
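The namespace dumps in this log each advertise eight LBA formats and name the current one; the per-format table that follows gives each format's data and metadata sizes. As a rough illustration of where those numbers come from, here is a minimal C sketch, assuming the standard NVMe LBA Format (LBAF) descriptor layout from Identify Namespace (metadata size in bits 15:0, LBADS = log2 of the data size in bits 23:16); the descriptor dwords are hypothetical values chosen to reproduce the table printed below.

/* Decode NVMe LBAF descriptors into the "Data Size / Metadata Size" lines
 * seen in the identify output. LBADS is a power-of-two exponent. */
#include <stdint.h>
#include <stdio.h>

static void decode_lbaf(unsigned idx, uint32_t lbaf)
{
    uint16_t ms    = lbaf & 0xffff;        /* metadata size in bytes */
    uint8_t  lbads = (lbaf >> 16) & 0xff;  /* log2(data size)        */
    printf("LBA Format #%02u: Data Size: %u Metadata Size: %u\n",
           idx, 1u << lbads, ms);
}

int main(void)
{
    /* Hypothetical descriptors: LBADS=9 -> 512 B, LBADS=12 -> 4096 B. */
    uint32_t lbafs[8] = {
        (9u << 16) | 0, (9u << 16) | 8, (9u << 16) | 16, (9u << 16) | 64,
        (12u << 16) | 0, (12u << 16) | 8, (12u << 16) | 16, (12u << 16) | 64,
    };
    for (unsigned i = 0; i < 8; i++)
        decode_lbaf(i, lbafs[i]);
    return 0;
}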
LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:15.655 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:15.655 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:15.655 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:15.655 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:15.655 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:15.655 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:15.655 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:15.655 00:08:15.655 NVM Specific Namespace Data 00:08:15.655 =========================== 00:08:15.655 Logical Block Storage Tag Mask: 0 00:08:15.655 Protection Information Capabilities: 00:08:15.656 16b Guard Protection Information Storage Tag Support: No 00:08:15.656 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:15.656 Storage Tag Check Read Support: No 00:08:15.656 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:15.656 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:15.656 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:15.656 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:15.656 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:15.656 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:15.656 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:15.656 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:15.656 Namespace ID:2 00:08:15.656 Error Recovery Timeout: Unlimited 00:08:15.656 Command Set Identifier: NVM (00h) 00:08:15.656 Deallocate: Supported 00:08:15.656 Deallocated/Unwritten Error: Supported 00:08:15.656 Deallocated Read Value: All 0x00 00:08:15.656 Deallocate in Write Zeroes: Not Supported 00:08:15.656 Deallocated Guard Field: 0xFFFF 00:08:15.656 Flush: Supported 00:08:15.656 Reservation: Not Supported 00:08:15.656 Namespace Sharing Capabilities: Private 00:08:15.656 Size (in LBAs): 1048576 (4GiB) 00:08:15.656 Capacity (in LBAs): 1048576 (4GiB) 00:08:15.656 Utilization (in LBAs): 1048576 (4GiB) 00:08:15.656 Thin Provisioning: Not Supported 00:08:15.656 Per-NS Atomic Units: No 00:08:15.656 Maximum Single Source Range Length: 128 00:08:15.656 Maximum Copy Length: 128 00:08:15.656 Maximum Source Range Count: 128 00:08:15.656 NGUID/EUI64 Never Reused: No 00:08:15.656 Namespace Write Protected: No 00:08:15.656 Number of LBA Formats: 8 00:08:15.656 Current LBA Format: LBA Format #04 00:08:15.656 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:15.656 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:15.656 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:15.656 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:15.656 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:15.656 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:15.656 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:15.656 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:15.656 00:08:15.656 NVM Specific Namespace Data 00:08:15.656 =========================== 00:08:15.656 Logical Block Storage Tag Mask: 0 00:08:15.656 Protection Information Capabilities: 00:08:15.656 16b Guard Protection Information Storage Tag Support: No 00:08:15.656 16b Guard Protection Information Storage Tag Mask: Any 
bit in LBSTM can be 0 00:08:15.656 Storage Tag Check Read Support: No 00:08:15.656 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:15.656 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:15.656 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:15.656 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:15.656 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:15.656 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:15.656 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:15.656 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:15.656 Namespace ID:3 00:08:15.656 Error Recovery Timeout: Unlimited 00:08:15.656 Command Set Identifier: NVM (00h) 00:08:15.656 Deallocate: Supported 00:08:15.656 Deallocated/Unwritten Error: Supported 00:08:15.656 Deallocated Read Value: All 0x00 00:08:15.656 Deallocate in Write Zeroes: Not Supported 00:08:15.656 Deallocated Guard Field: 0xFFFF 00:08:15.656 Flush: Supported 00:08:15.656 Reservation: Not Supported 00:08:15.656 Namespace Sharing Capabilities: Private 00:08:15.656 Size (in LBAs): 1048576 (4GiB) 00:08:15.656 Capacity (in LBAs): 1048576 (4GiB) 00:08:15.656 Utilization (in LBAs): 1048576 (4GiB) 00:08:15.656 Thin Provisioning: Not Supported 00:08:15.656 Per-NS Atomic Units: No 00:08:15.656 Maximum Single Source Range Length: 128 00:08:15.656 Maximum Copy Length: 128 00:08:15.656 Maximum Source Range Count: 128 00:08:15.656 NGUID/EUI64 Never Reused: No 00:08:15.656 Namespace Write Protected: No 00:08:15.656 Number of LBA Formats: 8 00:08:15.656 Current LBA Format: LBA Format #04 00:08:15.656 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:15.656 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:15.656 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:15.656 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:15.656 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:15.656 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:15.656 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:15.656 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:15.656 00:08:15.656 NVM Specific Namespace Data 00:08:15.656 =========================== 00:08:15.656 Logical Block Storage Tag Mask: 0 00:08:15.656 Protection Information Capabilities: 00:08:15.656 16b Guard Protection Information Storage Tag Support: No 00:08:15.656 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:15.656 Storage Tag Check Read Support: No 00:08:15.656 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:15.656 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:15.656 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:15.656 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:15.656 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:15.656 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:15.656 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:15.656 
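The size, capacity, and utilization figures in these namespace dumps pair an LBA count with a GiB value in parentheses. The conversion is plain arithmetic: LBA count times the data size of the current LBA format. A minimal sketch, using the 1048576-LBA, 4096-byte-block namespaces reported above:

/* Reproduce the "(4GiB)" annotation: bytes = LBA count * block size. */
#include <stdint.h>
#include <stdio.h>

int main(void)
{
    uint64_t nsze  = 1048576;  /* Size (in LBAs) from the dump          */
    uint32_t block = 4096;     /* data size of current LBA Format #04   */
    uint64_t bytes = nsze * block;
    printf("%llu LBAs x %u B = %llu bytes (%.1f GiB)\n",
           (unsigned long long)nsze, block,
           (unsigned long long)bytes, bytes / (1024.0 * 1024 * 1024));
    /* prints: 1048576 LBAs x 4096 B = 4294967296 bytes (4.0 GiB) */
    return 0;
}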
Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:15.656 06:42:08 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:08:15.656 06:42:08 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' -i 0 00:08:15.919 ===================================================== 00:08:15.919 NVMe Controller at 0000:00:10.0 [1b36:0010] 00:08:15.919 ===================================================== 00:08:15.919 Controller Capabilities/Features 00:08:15.919 ================================ 00:08:15.919 Vendor ID: 1b36 00:08:15.919 Subsystem Vendor ID: 1af4 00:08:15.919 Serial Number: 12340 00:08:15.919 Model Number: QEMU NVMe Ctrl 00:08:15.919 Firmware Version: 8.0.0 00:08:15.919 Recommended Arb Burst: 6 00:08:15.919 IEEE OUI Identifier: 00 54 52 00:08:15.919 Multi-path I/O 00:08:15.919 May have multiple subsystem ports: No 00:08:15.919 May have multiple controllers: No 00:08:15.919 Associated with SR-IOV VF: No 00:08:15.919 Max Data Transfer Size: 524288 00:08:15.919 Max Number of Namespaces: 256 00:08:15.919 Max Number of I/O Queues: 64 00:08:15.919 NVMe Specification Version (VS): 1.4 00:08:15.919 NVMe Specification Version (Identify): 1.4 00:08:15.919 Maximum Queue Entries: 2048 00:08:15.919 Contiguous Queues Required: Yes 00:08:15.919 Arbitration Mechanisms Supported 00:08:15.919 Weighted Round Robin: Not Supported 00:08:15.919 Vendor Specific: Not Supported 00:08:15.919 Reset Timeout: 7500 ms 00:08:15.919 Doorbell Stride: 4 bytes 00:08:15.919 NVM Subsystem Reset: Not Supported 00:08:15.919 Command Sets Supported 00:08:15.919 NVM Command Set: Supported 00:08:15.919 Boot Partition: Not Supported 00:08:15.919 Memory Page Size Minimum: 4096 bytes 00:08:15.919 Memory Page Size Maximum: 65536 bytes 00:08:15.919 Persistent Memory Region: Not Supported 00:08:15.919 Optional Asynchronous Events Supported 00:08:15.919 Namespace Attribute Notices: Supported 00:08:15.919 Firmware Activation Notices: Not Supported 00:08:15.919 ANA Change Notices: Not Supported 00:08:15.919 PLE Aggregate Log Change Notices: Not Supported 00:08:15.919 LBA Status Info Alert Notices: Not Supported 00:08:15.919 EGE Aggregate Log Change Notices: Not Supported 00:08:15.919 Normal NVM Subsystem Shutdown event: Not Supported 00:08:15.919 Zone Descriptor Change Notices: Not Supported 00:08:15.919 Discovery Log Change Notices: Not Supported 00:08:15.919 Controller Attributes 00:08:15.919 128-bit Host Identifier: Not Supported 00:08:15.919 Non-Operational Permissive Mode: Not Supported 00:08:15.919 NVM Sets: Not Supported 00:08:15.919 Read Recovery Levels: Not Supported 00:08:15.919 Endurance Groups: Not Supported 00:08:15.919 Predictable Latency Mode: Not Supported 00:08:15.919 Traffic Based Keep ALive: Not Supported 00:08:15.919 Namespace Granularity: Not Supported 00:08:15.919 SQ Associations: Not Supported 00:08:15.919 UUID List: Not Supported 00:08:15.919 Multi-Domain Subsystem: Not Supported 00:08:15.919 Fixed Capacity Management: Not Supported 00:08:15.919 Variable Capacity Management: Not Supported 00:08:15.919 Delete Endurance Group: Not Supported 00:08:15.919 Delete NVM Set: Not Supported 00:08:15.919 Extended LBA Formats Supported: Supported 00:08:15.919 Flexible Data Placement Supported: Not Supported 00:08:15.919 00:08:15.919 Controller Memory Buffer Support 00:08:15.919 ================================ 00:08:15.919 Supported: No 00:08:15.919 00:08:15.919 Persistent Memory Region 
Support 00:08:15.919 ================================ 00:08:15.919 Supported: No 00:08:15.919 00:08:15.919 Admin Command Set Attributes 00:08:15.919 ============================ 00:08:15.919 Security Send/Receive: Not Supported 00:08:15.919 Format NVM: Supported 00:08:15.919 Firmware Activate/Download: Not Supported 00:08:15.920 Namespace Management: Supported 00:08:15.920 Device Self-Test: Not Supported 00:08:15.920 Directives: Supported 00:08:15.920 NVMe-MI: Not Supported 00:08:15.920 Virtualization Management: Not Supported 00:08:15.920 Doorbell Buffer Config: Supported 00:08:15.920 Get LBA Status Capability: Not Supported 00:08:15.920 Command & Feature Lockdown Capability: Not Supported 00:08:15.920 Abort Command Limit: 4 00:08:15.920 Async Event Request Limit: 4 00:08:15.920 Number of Firmware Slots: N/A 00:08:15.920 Firmware Slot 1 Read-Only: N/A 00:08:15.920 Firmware Activation Without Reset: N/A 00:08:15.920 Multiple Update Detection Support: N/A 00:08:15.920 Firmware Update Granularity: No Information Provided 00:08:15.920 Per-Namespace SMART Log: Yes 00:08:15.920 Asymmetric Namespace Access Log Page: Not Supported 00:08:15.920 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:08:15.920 Command Effects Log Page: Supported 00:08:15.920 Get Log Page Extended Data: Supported 00:08:15.920 Telemetry Log Pages: Not Supported 00:08:15.920 Persistent Event Log Pages: Not Supported 00:08:15.920 Supported Log Pages Log Page: May Support 00:08:15.920 Commands Supported & Effects Log Page: Not Supported 00:08:15.920 Feature Identifiers & Effects Log Page:May Support 00:08:15.920 NVMe-MI Commands & Effects Log Page: May Support 00:08:15.920 Data Area 4 for Telemetry Log: Not Supported 00:08:15.920 Error Log Page Entries Supported: 1 00:08:15.920 Keep Alive: Not Supported 00:08:15.920 00:08:15.920 NVM Command Set Attributes 00:08:15.920 ========================== 00:08:15.920 Submission Queue Entry Size 00:08:15.920 Max: 64 00:08:15.920 Min: 64 00:08:15.920 Completion Queue Entry Size 00:08:15.920 Max: 16 00:08:15.920 Min: 16 00:08:15.920 Number of Namespaces: 256 00:08:15.920 Compare Command: Supported 00:08:15.920 Write Uncorrectable Command: Not Supported 00:08:15.920 Dataset Management Command: Supported 00:08:15.920 Write Zeroes Command: Supported 00:08:15.920 Set Features Save Field: Supported 00:08:15.920 Reservations: Not Supported 00:08:15.920 Timestamp: Supported 00:08:15.920 Copy: Supported 00:08:15.920 Volatile Write Cache: Present 00:08:15.920 Atomic Write Unit (Normal): 1 00:08:15.920 Atomic Write Unit (PFail): 1 00:08:15.920 Atomic Compare & Write Unit: 1 00:08:15.920 Fused Compare & Write: Not Supported 00:08:15.920 Scatter-Gather List 00:08:15.920 SGL Command Set: Supported 00:08:15.920 SGL Keyed: Not Supported 00:08:15.920 SGL Bit Bucket Descriptor: Not Supported 00:08:15.920 SGL Metadata Pointer: Not Supported 00:08:15.920 Oversized SGL: Not Supported 00:08:15.920 SGL Metadata Address: Not Supported 00:08:15.920 SGL Offset: Not Supported 00:08:15.920 Transport SGL Data Block: Not Supported 00:08:15.920 Replay Protected Memory Block: Not Supported 00:08:15.920 00:08:15.920 Firmware Slot Information 00:08:15.920 ========================= 00:08:15.920 Active slot: 1 00:08:15.920 Slot 1 Firmware Revision: 1.0 00:08:15.920 00:08:15.920 00:08:15.920 Commands Supported and Effects 00:08:15.920 ============================== 00:08:15.920 Admin Commands 00:08:15.920 -------------- 00:08:15.920 Delete I/O Submission Queue (00h): Supported 00:08:15.920 Create I/O Submission Queue (01h): 
Supported 00:08:15.920 Get Log Page (02h): Supported 00:08:15.920 Delete I/O Completion Queue (04h): Supported 00:08:15.920 Create I/O Completion Queue (05h): Supported 00:08:15.920 Identify (06h): Supported 00:08:15.920 Abort (08h): Supported 00:08:15.920 Set Features (09h): Supported 00:08:15.920 Get Features (0Ah): Supported 00:08:15.920 Asynchronous Event Request (0Ch): Supported 00:08:15.920 Namespace Attachment (15h): Supported NS-Inventory-Change 00:08:15.920 Directive Send (19h): Supported 00:08:15.920 Directive Receive (1Ah): Supported 00:08:15.920 Virtualization Management (1Ch): Supported 00:08:15.920 Doorbell Buffer Config (7Ch): Supported 00:08:15.920 Format NVM (80h): Supported LBA-Change 00:08:15.920 I/O Commands 00:08:15.920 ------------ 00:08:15.920 Flush (00h): Supported LBA-Change 00:08:15.920 Write (01h): Supported LBA-Change 00:08:15.920 Read (02h): Supported 00:08:15.920 Compare (05h): Supported 00:08:15.920 Write Zeroes (08h): Supported LBA-Change 00:08:15.920 Dataset Management (09h): Supported LBA-Change 00:08:15.920 Unknown (0Ch): Supported 00:08:15.920 Unknown (12h): Supported 00:08:15.920 Copy (19h): Supported LBA-Change 00:08:15.920 Unknown (1Dh): Supported LBA-Change 00:08:15.920 00:08:15.920 Error Log 00:08:15.920 ========= 00:08:15.920 00:08:15.920 Arbitration 00:08:15.920 =========== 00:08:15.920 Arbitration Burst: no limit 00:08:15.920 00:08:15.920 Power Management 00:08:15.920 ================ 00:08:15.920 Number of Power States: 1 00:08:15.920 Current Power State: Power State #0 00:08:15.920 Power State #0: 00:08:15.920 Max Power: 25.00 W 00:08:15.920 Non-Operational State: Operational 00:08:15.920 Entry Latency: 16 microseconds 00:08:15.920 Exit Latency: 4 microseconds 00:08:15.920 Relative Read Throughput: 0 00:08:15.920 Relative Read Latency: 0 00:08:15.920 Relative Write Throughput: 0 00:08:15.920 Relative Write Latency: 0 00:08:15.920 Idle Power: Not Reported 00:08:15.920 Active Power: Not Reported 00:08:15.920 Non-Operational Permissive Mode: Not Supported 00:08:15.920 00:08:15.920 Health Information 00:08:15.920 ================== 00:08:15.920 Critical Warnings: 00:08:15.920 Available Spare Space: OK 00:08:15.920 Temperature: OK 00:08:15.920 Device Reliability: OK 00:08:15.920 Read Only: No 00:08:15.920 Volatile Memory Backup: OK 00:08:15.920 Current Temperature: 323 Kelvin (50 Celsius) 00:08:15.920 Temperature Threshold: 343 Kelvin (70 Celsius) 00:08:15.920 Available Spare: 0% 00:08:15.920 Available Spare Threshold: 0% 00:08:15.920 Life Percentage Used: 0% 00:08:15.920 Data Units Read: 696 00:08:15.920 Data Units Written: 624 00:08:15.920 Host Read Commands: 37919 00:08:15.920 Host Write Commands: 37705 00:08:15.920 Controller Busy Time: 0 minutes 00:08:15.920 Power Cycles: 0 00:08:15.920 Power On Hours: 0 hours 00:08:15.920 Unsafe Shutdowns: 0 00:08:15.920 Unrecoverable Media Errors: 0 00:08:15.920 Lifetime Error Log Entries: 0 00:08:15.920 Warning Temperature Time: 0 minutes 00:08:15.920 Critical Temperature Time: 0 minutes 00:08:15.920 00:08:15.920 Number of Queues 00:08:15.920 ================ 00:08:15.920 Number of I/O Submission Queues: 64 00:08:15.920 Number of I/O Completion Queues: 64 00:08:15.920 00:08:15.920 ZNS Specific Controller Data 00:08:15.920 ============================ 00:08:15.920 Zone Append Size Limit: 0 00:08:15.920 00:08:15.920 00:08:15.920 Active Namespaces 00:08:15.920 ================= 00:08:15.920 Namespace ID:1 00:08:15.920 Error Recovery Timeout: Unlimited 00:08:15.920 Command Set Identifier: NVM (00h) 00:08:15.920 
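The health blocks in these dumps report temperatures in Kelvin with the Celsius equivalent in parentheses; NVMe SMART data carries the composite temperature as an unsigned Kelvin value, so the conversion is a constant offset of 273. A small sketch using the values reported above:

/* Convert the SMART composite temperature and threshold from the dump. */
#include <stdint.h>
#include <stdio.h>

int main(void)
{
    uint16_t current_k = 323, threshold_k = 343;  /* values from the log */
    printf("Current Temperature: %u Kelvin (%d Celsius)\n",
           current_k, current_k - 273);            /* 323 K -> 50 C */
    printf("Temperature Threshold: %u Kelvin (%d Celsius)\n",
           threshold_k, threshold_k - 273);        /* 343 K -> 70 C */
    if (current_k >= threshold_k)
        printf("WARNING: over temperature threshold\n");
    return 0;
}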
Deallocate: Supported 00:08:15.920 Deallocated/Unwritten Error: Supported 00:08:15.920 Deallocated Read Value: All 0x00 00:08:15.920 Deallocate in Write Zeroes: Not Supported 00:08:15.920 Deallocated Guard Field: 0xFFFF 00:08:15.920 Flush: Supported 00:08:15.920 Reservation: Not Supported 00:08:15.920 Metadata Transferred as: Separate Metadata Buffer 00:08:15.920 Namespace Sharing Capabilities: Private 00:08:15.920 Size (in LBAs): 1548666 (5GiB) 00:08:15.920 Capacity (in LBAs): 1548666 (5GiB) 00:08:15.920 Utilization (in LBAs): 1548666 (5GiB) 00:08:15.920 Thin Provisioning: Not Supported 00:08:15.920 Per-NS Atomic Units: No 00:08:15.920 Maximum Single Source Range Length: 128 00:08:15.920 Maximum Copy Length: 128 00:08:15.920 Maximum Source Range Count: 128 00:08:15.920 NGUID/EUI64 Never Reused: No 00:08:15.920 Namespace Write Protected: No 00:08:15.920 Number of LBA Formats: 8 00:08:15.920 Current LBA Format: LBA Format #07 00:08:15.920 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:15.920 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:15.920 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:15.920 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:15.920 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:15.920 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:15.920 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:15.920 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:15.920 00:08:15.920 NVM Specific Namespace Data 00:08:15.920 =========================== 00:08:15.920 Logical Block Storage Tag Mask: 0 00:08:15.920 Protection Information Capabilities: 00:08:15.920 16b Guard Protection Information Storage Tag Support: No 00:08:15.921 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:15.921 Storage Tag Check Read Support: No 00:08:15.921 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:15.921 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:15.921 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:15.921 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:15.921 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:15.921 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:15.921 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:15.921 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:15.921 06:42:08 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:08:15.921 06:42:08 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' -i 0 00:08:15.921 ===================================================== 00:08:15.921 NVMe Controller at 0000:00:11.0 [1b36:0010] 00:08:15.921 ===================================================== 00:08:15.921 Controller Capabilities/Features 00:08:15.921 ================================ 00:08:15.921 Vendor ID: 1b36 00:08:15.921 Subsystem Vendor ID: 1af4 00:08:15.921 Serial Number: 12341 00:08:15.921 Model Number: QEMU NVMe Ctrl 00:08:15.921 Firmware Version: 8.0.0 00:08:15.921 Recommended Arb Burst: 6 00:08:15.921 IEEE OUI Identifier: 00 54 52 00:08:15.921 Multi-path I/O 00:08:15.921 May have multiple subsystem ports: No 
00:08:15.921 May have multiple controllers: No 00:08:15.921 Associated with SR-IOV VF: No 00:08:15.921 Max Data Transfer Size: 524288 00:08:15.921 Max Number of Namespaces: 256 00:08:15.921 Max Number of I/O Queues: 64 00:08:15.921 NVMe Specification Version (VS): 1.4 00:08:15.921 NVMe Specification Version (Identify): 1.4 00:08:15.921 Maximum Queue Entries: 2048 00:08:15.921 Contiguous Queues Required: Yes 00:08:15.921 Arbitration Mechanisms Supported 00:08:15.921 Weighted Round Robin: Not Supported 00:08:15.921 Vendor Specific: Not Supported 00:08:15.921 Reset Timeout: 7500 ms 00:08:15.921 Doorbell Stride: 4 bytes 00:08:15.921 NVM Subsystem Reset: Not Supported 00:08:15.921 Command Sets Supported 00:08:15.921 NVM Command Set: Supported 00:08:15.921 Boot Partition: Not Supported 00:08:15.921 Memory Page Size Minimum: 4096 bytes 00:08:15.921 Memory Page Size Maximum: 65536 bytes 00:08:15.921 Persistent Memory Region: Not Supported 00:08:15.921 Optional Asynchronous Events Supported 00:08:15.921 Namespace Attribute Notices: Supported 00:08:15.921 Firmware Activation Notices: Not Supported 00:08:15.921 ANA Change Notices: Not Supported 00:08:15.921 PLE Aggregate Log Change Notices: Not Supported 00:08:15.921 LBA Status Info Alert Notices: Not Supported 00:08:15.921 EGE Aggregate Log Change Notices: Not Supported 00:08:15.921 Normal NVM Subsystem Shutdown event: Not Supported 00:08:15.921 Zone Descriptor Change Notices: Not Supported 00:08:15.921 Discovery Log Change Notices: Not Supported 00:08:15.921 Controller Attributes 00:08:15.921 128-bit Host Identifier: Not Supported 00:08:15.921 Non-Operational Permissive Mode: Not Supported 00:08:15.921 NVM Sets: Not Supported 00:08:15.921 Read Recovery Levels: Not Supported 00:08:15.921 Endurance Groups: Not Supported 00:08:15.921 Predictable Latency Mode: Not Supported 00:08:15.921 Traffic Based Keep ALive: Not Supported 00:08:15.921 Namespace Granularity: Not Supported 00:08:15.921 SQ Associations: Not Supported 00:08:15.921 UUID List: Not Supported 00:08:15.921 Multi-Domain Subsystem: Not Supported 00:08:15.921 Fixed Capacity Management: Not Supported 00:08:15.921 Variable Capacity Management: Not Supported 00:08:15.921 Delete Endurance Group: Not Supported 00:08:15.921 Delete NVM Set: Not Supported 00:08:15.921 Extended LBA Formats Supported: Supported 00:08:15.921 Flexible Data Placement Supported: Not Supported 00:08:15.921 00:08:15.921 Controller Memory Buffer Support 00:08:15.921 ================================ 00:08:15.921 Supported: No 00:08:15.921 00:08:15.921 Persistent Memory Region Support 00:08:15.921 ================================ 00:08:15.921 Supported: No 00:08:15.921 00:08:15.921 Admin Command Set Attributes 00:08:15.921 ============================ 00:08:15.921 Security Send/Receive: Not Supported 00:08:15.921 Format NVM: Supported 00:08:15.921 Firmware Activate/Download: Not Supported 00:08:15.921 Namespace Management: Supported 00:08:15.921 Device Self-Test: Not Supported 00:08:15.921 Directives: Supported 00:08:15.921 NVMe-MI: Not Supported 00:08:15.921 Virtualization Management: Not Supported 00:08:15.921 Doorbell Buffer Config: Supported 00:08:15.921 Get LBA Status Capability: Not Supported 00:08:15.921 Command & Feature Lockdown Capability: Not Supported 00:08:15.921 Abort Command Limit: 4 00:08:15.921 Async Event Request Limit: 4 00:08:15.921 Number of Firmware Slots: N/A 00:08:15.921 Firmware Slot 1 Read-Only: N/A 00:08:15.921 Firmware Activation Without Reset: N/A 00:08:15.921 Multiple Update Detection Support: N/A 
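Each controller dump reports a doorbell stride of 4 bytes, i.e. CAP.DSTRD = 0, since the stride in bytes is 4 << DSTRD. Per the NVMe specification, a queue pair's doorbells sit at fixed offsets from 0x1000 in BAR0: the submission queue y tail doorbell at 0x1000 + (2y) * stride and the completion queue y head doorbell at 0x1000 + (2y + 1) * stride. A sketch of that offset arithmetic, with the stride value from these dumps:

/* Compute doorbell register offsets from CAP.DSTRD, per the NVMe spec. */
#include <stdint.h>
#include <stdio.h>

int main(void)
{
    uint32_t dstrd  = 0;            /* matches "Doorbell Stride: 4 bytes" */
    uint32_t stride = 4u << dstrd;  /* = 4 bytes                          */
    for (unsigned qid = 0; qid < 3; qid++) {
        uint32_t sq_tail = 0x1000 + (2u * qid) * stride;
        uint32_t cq_head = 0x1000 + (2u * qid + 1) * stride;
        printf("queue %u: SQ tail doorbell @ 0x%04x, CQ head doorbell @ 0x%04x\n",
               qid, sq_tail, cq_head);
    }
    return 0;
}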
00:08:15.921 Firmware Update Granularity: No Information Provided 00:08:15.921 Per-Namespace SMART Log: Yes 00:08:15.921 Asymmetric Namespace Access Log Page: Not Supported 00:08:15.921 Subsystem NQN: nqn.2019-08.org.qemu:12341 00:08:15.921 Command Effects Log Page: Supported 00:08:15.921 Get Log Page Extended Data: Supported 00:08:15.921 Telemetry Log Pages: Not Supported 00:08:15.921 Persistent Event Log Pages: Not Supported 00:08:15.921 Supported Log Pages Log Page: May Support 00:08:15.921 Commands Supported & Effects Log Page: Not Supported 00:08:15.921 Feature Identifiers & Effects Log Page:May Support 00:08:15.921 NVMe-MI Commands & Effects Log Page: May Support 00:08:15.921 Data Area 4 for Telemetry Log: Not Supported 00:08:15.921 Error Log Page Entries Supported: 1 00:08:15.921 Keep Alive: Not Supported 00:08:15.921 00:08:15.921 NVM Command Set Attributes 00:08:15.921 ========================== 00:08:15.921 Submission Queue Entry Size 00:08:15.921 Max: 64 00:08:15.921 Min: 64 00:08:15.921 Completion Queue Entry Size 00:08:15.921 Max: 16 00:08:15.921 Min: 16 00:08:15.921 Number of Namespaces: 256 00:08:15.921 Compare Command: Supported 00:08:15.921 Write Uncorrectable Command: Not Supported 00:08:15.921 Dataset Management Command: Supported 00:08:15.921 Write Zeroes Command: Supported 00:08:15.921 Set Features Save Field: Supported 00:08:15.921 Reservations: Not Supported 00:08:15.921 Timestamp: Supported 00:08:15.921 Copy: Supported 00:08:15.921 Volatile Write Cache: Present 00:08:15.921 Atomic Write Unit (Normal): 1 00:08:15.921 Atomic Write Unit (PFail): 1 00:08:15.921 Atomic Compare & Write Unit: 1 00:08:15.921 Fused Compare & Write: Not Supported 00:08:15.921 Scatter-Gather List 00:08:15.921 SGL Command Set: Supported 00:08:15.921 SGL Keyed: Not Supported 00:08:15.921 SGL Bit Bucket Descriptor: Not Supported 00:08:15.921 SGL Metadata Pointer: Not Supported 00:08:15.921 Oversized SGL: Not Supported 00:08:15.921 SGL Metadata Address: Not Supported 00:08:15.921 SGL Offset: Not Supported 00:08:15.921 Transport SGL Data Block: Not Supported 00:08:15.921 Replay Protected Memory Block: Not Supported 00:08:15.921 00:08:15.921 Firmware Slot Information 00:08:15.921 ========================= 00:08:15.921 Active slot: 1 00:08:15.921 Slot 1 Firmware Revision: 1.0 00:08:15.921 00:08:15.921 00:08:15.921 Commands Supported and Effects 00:08:15.921 ============================== 00:08:15.921 Admin Commands 00:08:15.921 -------------- 00:08:15.921 Delete I/O Submission Queue (00h): Supported 00:08:15.921 Create I/O Submission Queue (01h): Supported 00:08:15.921 Get Log Page (02h): Supported 00:08:15.921 Delete I/O Completion Queue (04h): Supported 00:08:15.921 Create I/O Completion Queue (05h): Supported 00:08:15.921 Identify (06h): Supported 00:08:15.921 Abort (08h): Supported 00:08:15.921 Set Features (09h): Supported 00:08:15.921 Get Features (0Ah): Supported 00:08:15.921 Asynchronous Event Request (0Ch): Supported 00:08:15.921 Namespace Attachment (15h): Supported NS-Inventory-Change 00:08:15.921 Directive Send (19h): Supported 00:08:15.921 Directive Receive (1Ah): Supported 00:08:15.921 Virtualization Management (1Ch): Supported 00:08:15.921 Doorbell Buffer Config (7Ch): Supported 00:08:15.921 Format NVM (80h): Supported LBA-Change 00:08:15.921 I/O Commands 00:08:15.921 ------------ 00:08:15.921 Flush (00h): Supported LBA-Change 00:08:15.921 Write (01h): Supported LBA-Change 00:08:15.921 Read (02h): Supported 00:08:15.921 Compare (05h): Supported 00:08:15.921 Write Zeroes (08h): Supported 
LBA-Change 00:08:15.921 Dataset Management (09h): Supported LBA-Change 00:08:15.922 Unknown (0Ch): Supported 00:08:15.922 Unknown (12h): Supported 00:08:15.922 Copy (19h): Supported LBA-Change 00:08:15.922 Unknown (1Dh): Supported LBA-Change 00:08:15.922 00:08:15.922 Error Log 00:08:15.922 ========= 00:08:15.922 00:08:15.922 Arbitration 00:08:15.922 =========== 00:08:15.922 Arbitration Burst: no limit 00:08:15.922 00:08:15.922 Power Management 00:08:15.922 ================ 00:08:15.922 Number of Power States: 1 00:08:15.922 Current Power State: Power State #0 00:08:15.922 Power State #0: 00:08:15.922 Max Power: 25.00 W 00:08:15.922 Non-Operational State: Operational 00:08:15.922 Entry Latency: 16 microseconds 00:08:15.922 Exit Latency: 4 microseconds 00:08:15.922 Relative Read Throughput: 0 00:08:15.922 Relative Read Latency: 0 00:08:15.922 Relative Write Throughput: 0 00:08:15.922 Relative Write Latency: 0 00:08:15.922 Idle Power: Not Reported 00:08:15.922 Active Power: Not Reported 00:08:15.922 Non-Operational Permissive Mode: Not Supported 00:08:15.922 00:08:15.922 Health Information 00:08:15.922 ================== 00:08:15.922 Critical Warnings: 00:08:15.922 Available Spare Space: OK 00:08:15.922 Temperature: OK 00:08:15.922 Device Reliability: OK 00:08:15.922 Read Only: No 00:08:15.922 Volatile Memory Backup: OK 00:08:15.922 Current Temperature: 323 Kelvin (50 Celsius) 00:08:15.922 Temperature Threshold: 343 Kelvin (70 Celsius) 00:08:15.922 Available Spare: 0% 00:08:15.922 Available Spare Threshold: 0% 00:08:15.922 Life Percentage Used: 0% 00:08:15.922 Data Units Read: 1070 00:08:15.922 Data Units Written: 943 00:08:15.922 Host Read Commands: 55114 00:08:15.922 Host Write Commands: 54000 00:08:15.922 Controller Busy Time: 0 minutes 00:08:15.922 Power Cycles: 0 00:08:15.922 Power On Hours: 0 hours 00:08:15.922 Unsafe Shutdowns: 0 00:08:15.922 Unrecoverable Media Errors: 0 00:08:15.922 Lifetime Error Log Entries: 0 00:08:15.922 Warning Temperature Time: 0 minutes 00:08:15.922 Critical Temperature Time: 0 minutes 00:08:15.922 00:08:15.922 Number of Queues 00:08:15.922 ================ 00:08:15.922 Number of I/O Submission Queues: 64 00:08:15.922 Number of I/O Completion Queues: 64 00:08:15.922 00:08:15.922 ZNS Specific Controller Data 00:08:15.922 ============================ 00:08:15.922 Zone Append Size Limit: 0 00:08:15.922 00:08:15.922 00:08:15.922 Active Namespaces 00:08:15.922 ================= 00:08:15.922 Namespace ID:1 00:08:15.922 Error Recovery Timeout: Unlimited 00:08:15.922 Command Set Identifier: NVM (00h) 00:08:15.922 Deallocate: Supported 00:08:15.922 Deallocated/Unwritten Error: Supported 00:08:15.922 Deallocated Read Value: All 0x00 00:08:15.922 Deallocate in Write Zeroes: Not Supported 00:08:15.922 Deallocated Guard Field: 0xFFFF 00:08:15.922 Flush: Supported 00:08:15.922 Reservation: Not Supported 00:08:15.922 Namespace Sharing Capabilities: Private 00:08:15.922 Size (in LBAs): 1310720 (5GiB) 00:08:15.922 Capacity (in LBAs): 1310720 (5GiB) 00:08:15.922 Utilization (in LBAs): 1310720 (5GiB) 00:08:15.922 Thin Provisioning: Not Supported 00:08:15.922 Per-NS Atomic Units: No 00:08:15.922 Maximum Single Source Range Length: 128 00:08:15.922 Maximum Copy Length: 128 00:08:15.922 Maximum Source Range Count: 128 00:08:15.922 NGUID/EUI64 Never Reused: No 00:08:15.922 Namespace Write Protected: No 00:08:15.922 Number of LBA Formats: 8 00:08:15.922 Current LBA Format: LBA Format #04 00:08:15.922 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:15.922 LBA Format #01: Data 
Size: 512 Metadata Size: 8 00:08:15.922 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:15.922 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:15.922 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:15.922 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:15.922 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:15.922 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:15.922 00:08:15.922 NVM Specific Namespace Data 00:08:15.922 =========================== 00:08:15.922 Logical Block Storage Tag Mask: 0 00:08:15.922 Protection Information Capabilities: 00:08:15.922 16b Guard Protection Information Storage Tag Support: No 00:08:15.922 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:15.922 Storage Tag Check Read Support: No 00:08:15.922 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:15.922 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:15.922 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:15.922 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:15.922 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:15.922 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:15.922 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:15.922 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:15.922 06:42:08 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:08:15.922 06:42:08 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' -i 0 00:08:16.184 ===================================================== 00:08:16.184 NVMe Controller at 0000:00:12.0 [1b36:0010] 00:08:16.184 ===================================================== 00:08:16.184 Controller Capabilities/Features 00:08:16.184 ================================ 00:08:16.184 Vendor ID: 1b36 00:08:16.184 Subsystem Vendor ID: 1af4 00:08:16.184 Serial Number: 12342 00:08:16.184 Model Number: QEMU NVMe Ctrl 00:08:16.184 Firmware Version: 8.0.0 00:08:16.184 Recommended Arb Burst: 6 00:08:16.184 IEEE OUI Identifier: 00 54 52 00:08:16.184 Multi-path I/O 00:08:16.184 May have multiple subsystem ports: No 00:08:16.184 May have multiple controllers: No 00:08:16.184 Associated with SR-IOV VF: No 00:08:16.184 Max Data Transfer Size: 524288 00:08:16.184 Max Number of Namespaces: 256 00:08:16.184 Max Number of I/O Queues: 64 00:08:16.184 NVMe Specification Version (VS): 1.4 00:08:16.184 NVMe Specification Version (Identify): 1.4 00:08:16.184 Maximum Queue Entries: 2048 00:08:16.184 Contiguous Queues Required: Yes 00:08:16.184 Arbitration Mechanisms Supported 00:08:16.184 Weighted Round Robin: Not Supported 00:08:16.184 Vendor Specific: Not Supported 00:08:16.184 Reset Timeout: 7500 ms 00:08:16.184 Doorbell Stride: 4 bytes 00:08:16.184 NVM Subsystem Reset: Not Supported 00:08:16.184 Command Sets Supported 00:08:16.184 NVM Command Set: Supported 00:08:16.184 Boot Partition: Not Supported 00:08:16.184 Memory Page Size Minimum: 4096 bytes 00:08:16.184 Memory Page Size Maximum: 65536 bytes 00:08:16.184 Persistent Memory Region: Not Supported 00:08:16.184 Optional Asynchronous Events Supported 00:08:16.184 Namespace Attribute Notices: 
Supported 00:08:16.184 Firmware Activation Notices: Not Supported 00:08:16.184 ANA Change Notices: Not Supported 00:08:16.184 PLE Aggregate Log Change Notices: Not Supported 00:08:16.184 LBA Status Info Alert Notices: Not Supported 00:08:16.184 EGE Aggregate Log Change Notices: Not Supported 00:08:16.184 Normal NVM Subsystem Shutdown event: Not Supported 00:08:16.184 Zone Descriptor Change Notices: Not Supported 00:08:16.184 Discovery Log Change Notices: Not Supported 00:08:16.184 Controller Attributes 00:08:16.184 128-bit Host Identifier: Not Supported 00:08:16.184 Non-Operational Permissive Mode: Not Supported 00:08:16.184 NVM Sets: Not Supported 00:08:16.184 Read Recovery Levels: Not Supported 00:08:16.184 Endurance Groups: Not Supported 00:08:16.184 Predictable Latency Mode: Not Supported 00:08:16.184 Traffic Based Keep ALive: Not Supported 00:08:16.184 Namespace Granularity: Not Supported 00:08:16.184 SQ Associations: Not Supported 00:08:16.184 UUID List: Not Supported 00:08:16.184 Multi-Domain Subsystem: Not Supported 00:08:16.184 Fixed Capacity Management: Not Supported 00:08:16.184 Variable Capacity Management: Not Supported 00:08:16.184 Delete Endurance Group: Not Supported 00:08:16.184 Delete NVM Set: Not Supported 00:08:16.184 Extended LBA Formats Supported: Supported 00:08:16.184 Flexible Data Placement Supported: Not Supported 00:08:16.184 00:08:16.184 Controller Memory Buffer Support 00:08:16.184 ================================ 00:08:16.184 Supported: No 00:08:16.184 00:08:16.184 Persistent Memory Region Support 00:08:16.185 ================================ 00:08:16.185 Supported: No 00:08:16.185 00:08:16.185 Admin Command Set Attributes 00:08:16.185 ============================ 00:08:16.185 Security Send/Receive: Not Supported 00:08:16.185 Format NVM: Supported 00:08:16.185 Firmware Activate/Download: Not Supported 00:08:16.185 Namespace Management: Supported 00:08:16.185 Device Self-Test: Not Supported 00:08:16.185 Directives: Supported 00:08:16.185 NVMe-MI: Not Supported 00:08:16.185 Virtualization Management: Not Supported 00:08:16.185 Doorbell Buffer Config: Supported 00:08:16.185 Get LBA Status Capability: Not Supported 00:08:16.185 Command & Feature Lockdown Capability: Not Supported 00:08:16.185 Abort Command Limit: 4 00:08:16.185 Async Event Request Limit: 4 00:08:16.185 Number of Firmware Slots: N/A 00:08:16.185 Firmware Slot 1 Read-Only: N/A 00:08:16.185 Firmware Activation Without Reset: N/A 00:08:16.185 Multiple Update Detection Support: N/A 00:08:16.185 Firmware Update Granularity: No Information Provided 00:08:16.185 Per-Namespace SMART Log: Yes 00:08:16.185 Asymmetric Namespace Access Log Page: Not Supported 00:08:16.185 Subsystem NQN: nqn.2019-08.org.qemu:12342 00:08:16.185 Command Effects Log Page: Supported 00:08:16.185 Get Log Page Extended Data: Supported 00:08:16.185 Telemetry Log Pages: Not Supported 00:08:16.185 Persistent Event Log Pages: Not Supported 00:08:16.185 Supported Log Pages Log Page: May Support 00:08:16.185 Commands Supported & Effects Log Page: Not Supported 00:08:16.185 Feature Identifiers & Effects Log Page:May Support 00:08:16.185 NVMe-MI Commands & Effects Log Page: May Support 00:08:16.185 Data Area 4 for Telemetry Log: Not Supported 00:08:16.185 Error Log Page Entries Supported: 1 00:08:16.185 Keep Alive: Not Supported 00:08:16.185 00:08:16.185 NVM Command Set Attributes 00:08:16.185 ========================== 00:08:16.185 Submission Queue Entry Size 00:08:16.185 Max: 64 00:08:16.185 Min: 64 00:08:16.185 Completion Queue Entry Size 
00:08:16.185 Max: 16 00:08:16.185 Min: 16 00:08:16.185 Number of Namespaces: 256 00:08:16.185 Compare Command: Supported 00:08:16.185 Write Uncorrectable Command: Not Supported 00:08:16.185 Dataset Management Command: Supported 00:08:16.185 Write Zeroes Command: Supported 00:08:16.185 Set Features Save Field: Supported 00:08:16.185 Reservations: Not Supported 00:08:16.185 Timestamp: Supported 00:08:16.185 Copy: Supported 00:08:16.185 Volatile Write Cache: Present 00:08:16.185 Atomic Write Unit (Normal): 1 00:08:16.185 Atomic Write Unit (PFail): 1 00:08:16.185 Atomic Compare & Write Unit: 1 00:08:16.185 Fused Compare & Write: Not Supported 00:08:16.185 Scatter-Gather List 00:08:16.185 SGL Command Set: Supported 00:08:16.185 SGL Keyed: Not Supported 00:08:16.185 SGL Bit Bucket Descriptor: Not Supported 00:08:16.185 SGL Metadata Pointer: Not Supported 00:08:16.185 Oversized SGL: Not Supported 00:08:16.185 SGL Metadata Address: Not Supported 00:08:16.185 SGL Offset: Not Supported 00:08:16.185 Transport SGL Data Block: Not Supported 00:08:16.185 Replay Protected Memory Block: Not Supported 00:08:16.185 00:08:16.185 Firmware Slot Information 00:08:16.185 ========================= 00:08:16.185 Active slot: 1 00:08:16.185 Slot 1 Firmware Revision: 1.0 00:08:16.185 00:08:16.185 00:08:16.185 Commands Supported and Effects 00:08:16.185 ============================== 00:08:16.185 Admin Commands 00:08:16.185 -------------- 00:08:16.185 Delete I/O Submission Queue (00h): Supported 00:08:16.185 Create I/O Submission Queue (01h): Supported 00:08:16.185 Get Log Page (02h): Supported 00:08:16.185 Delete I/O Completion Queue (04h): Supported 00:08:16.185 Create I/O Completion Queue (05h): Supported 00:08:16.185 Identify (06h): Supported 00:08:16.185 Abort (08h): Supported 00:08:16.185 Set Features (09h): Supported 00:08:16.185 Get Features (0Ah): Supported 00:08:16.185 Asynchronous Event Request (0Ch): Supported 00:08:16.185 Namespace Attachment (15h): Supported NS-Inventory-Change 00:08:16.185 Directive Send (19h): Supported 00:08:16.185 Directive Receive (1Ah): Supported 00:08:16.185 Virtualization Management (1Ch): Supported 00:08:16.185 Doorbell Buffer Config (7Ch): Supported 00:08:16.185 Format NVM (80h): Supported LBA-Change 00:08:16.185 I/O Commands 00:08:16.185 ------------ 00:08:16.185 Flush (00h): Supported LBA-Change 00:08:16.185 Write (01h): Supported LBA-Change 00:08:16.185 Read (02h): Supported 00:08:16.185 Compare (05h): Supported 00:08:16.185 Write Zeroes (08h): Supported LBA-Change 00:08:16.185 Dataset Management (09h): Supported LBA-Change 00:08:16.185 Unknown (0Ch): Supported 00:08:16.185 Unknown (12h): Supported 00:08:16.185 Copy (19h): Supported LBA-Change 00:08:16.185 Unknown (1Dh): Supported LBA-Change 00:08:16.185 00:08:16.185 Error Log 00:08:16.185 ========= 00:08:16.185 00:08:16.185 Arbitration 00:08:16.185 =========== 00:08:16.185 Arbitration Burst: no limit 00:08:16.185 00:08:16.185 Power Management 00:08:16.185 ================ 00:08:16.185 Number of Power States: 1 00:08:16.185 Current Power State: Power State #0 00:08:16.185 Power State #0: 00:08:16.185 Max Power: 25.00 W 00:08:16.185 Non-Operational State: Operational 00:08:16.185 Entry Latency: 16 microseconds 00:08:16.185 Exit Latency: 4 microseconds 00:08:16.185 Relative Read Throughput: 0 00:08:16.185 Relative Read Latency: 0 00:08:16.185 Relative Write Throughput: 0 00:08:16.185 Relative Write Latency: 0 00:08:16.185 Idle Power: Not Reported 00:08:16.185 Active Power: Not Reported 00:08:16.185 Non-Operational Permissive 
Mode: Not Supported 00:08:16.185 00:08:16.185 Health Information 00:08:16.185 ================== 00:08:16.185 Critical Warnings: 00:08:16.185 Available Spare Space: OK 00:08:16.185 Temperature: OK 00:08:16.185 Device Reliability: OK 00:08:16.185 Read Only: No 00:08:16.185 Volatile Memory Backup: OK 00:08:16.185 Current Temperature: 323 Kelvin (50 Celsius) 00:08:16.185 Temperature Threshold: 343 Kelvin (70 Celsius) 00:08:16.185 Available Spare: 0% 00:08:16.185 Available Spare Threshold: 0% 00:08:16.185 Life Percentage Used: 0% 00:08:16.185 Data Units Read: 2297 00:08:16.185 Data Units Written: 2084 00:08:16.185 Host Read Commands: 116285 00:08:16.185 Host Write Commands: 114554 00:08:16.185 Controller Busy Time: 0 minutes 00:08:16.185 Power Cycles: 0 00:08:16.185 Power On Hours: 0 hours 00:08:16.185 Unsafe Shutdowns: 0 00:08:16.185 Unrecoverable Media Errors: 0 00:08:16.185 Lifetime Error Log Entries: 0 00:08:16.185 Warning Temperature Time: 0 minutes 00:08:16.185 Critical Temperature Time: 0 minutes 00:08:16.185 00:08:16.185 Number of Queues 00:08:16.185 ================ 00:08:16.185 Number of I/O Submission Queues: 64 00:08:16.185 Number of I/O Completion Queues: 64 00:08:16.185 00:08:16.185 ZNS Specific Controller Data 00:08:16.185 ============================ 00:08:16.185 Zone Append Size Limit: 0 00:08:16.185 00:08:16.185 00:08:16.185 Active Namespaces 00:08:16.185 ================= 00:08:16.185 Namespace ID:1 00:08:16.185 Error Recovery Timeout: Unlimited 00:08:16.185 Command Set Identifier: NVM (00h) 00:08:16.185 Deallocate: Supported 00:08:16.185 Deallocated/Unwritten Error: Supported 00:08:16.185 Deallocated Read Value: All 0x00 00:08:16.185 Deallocate in Write Zeroes: Not Supported 00:08:16.185 Deallocated Guard Field: 0xFFFF 00:08:16.185 Flush: Supported 00:08:16.185 Reservation: Not Supported 00:08:16.185 Namespace Sharing Capabilities: Private 00:08:16.185 Size (in LBAs): 1048576 (4GiB) 00:08:16.185 Capacity (in LBAs): 1048576 (4GiB) 00:08:16.185 Utilization (in LBAs): 1048576 (4GiB) 00:08:16.185 Thin Provisioning: Not Supported 00:08:16.185 Per-NS Atomic Units: No 00:08:16.185 Maximum Single Source Range Length: 128 00:08:16.185 Maximum Copy Length: 128 00:08:16.185 Maximum Source Range Count: 128 00:08:16.185 NGUID/EUI64 Never Reused: No 00:08:16.185 Namespace Write Protected: No 00:08:16.185 Number of LBA Formats: 8 00:08:16.185 Current LBA Format: LBA Format #04 00:08:16.185 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:16.185 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:16.185 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:16.185 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:16.185 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:16.185 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:16.185 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:16.185 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:16.185 00:08:16.186 NVM Specific Namespace Data 00:08:16.186 =========================== 00:08:16.186 Logical Block Storage Tag Mask: 0 00:08:16.186 Protection Information Capabilities: 00:08:16.186 16b Guard Protection Information Storage Tag Support: No 00:08:16.186 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:16.186 Storage Tag Check Read Support: No 00:08:16.186 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:16.186 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:16.186 Extended LBA 
Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:16.186 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:16.186 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:16.186 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:16.186 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:16.186 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:16.186 Namespace ID:2 00:08:16.186 Error Recovery Timeout: Unlimited 00:08:16.186 Command Set Identifier: NVM (00h) 00:08:16.186 Deallocate: Supported 00:08:16.186 Deallocated/Unwritten Error: Supported 00:08:16.186 Deallocated Read Value: All 0x00 00:08:16.186 Deallocate in Write Zeroes: Not Supported 00:08:16.186 Deallocated Guard Field: 0xFFFF 00:08:16.186 Flush: Supported 00:08:16.186 Reservation: Not Supported 00:08:16.186 Namespace Sharing Capabilities: Private 00:08:16.186 Size (in LBAs): 1048576 (4GiB) 00:08:16.186 Capacity (in LBAs): 1048576 (4GiB) 00:08:16.186 Utilization (in LBAs): 1048576 (4GiB) 00:08:16.186 Thin Provisioning: Not Supported 00:08:16.186 Per-NS Atomic Units: No 00:08:16.186 Maximum Single Source Range Length: 128 00:08:16.186 Maximum Copy Length: 128 00:08:16.186 Maximum Source Range Count: 128 00:08:16.186 NGUID/EUI64 Never Reused: No 00:08:16.186 Namespace Write Protected: No 00:08:16.186 Number of LBA Formats: 8 00:08:16.186 Current LBA Format: LBA Format #04 00:08:16.186 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:16.186 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:16.186 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:16.186 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:16.186 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:16.186 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:16.186 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:16.186 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:16.186 00:08:16.186 NVM Specific Namespace Data 00:08:16.186 =========================== 00:08:16.186 Logical Block Storage Tag Mask: 0 00:08:16.186 Protection Information Capabilities: 00:08:16.186 16b Guard Protection Information Storage Tag Support: No 00:08:16.186 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:16.186 Storage Tag Check Read Support: No 00:08:16.186 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:16.186 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:16.186 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:16.186 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:16.186 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:16.186 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:16.186 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:16.186 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:16.186 Namespace ID:3 00:08:16.186 Error Recovery Timeout: Unlimited 00:08:16.186 Command Set Identifier: NVM (00h) 00:08:16.186 Deallocate: Supported 00:08:16.186 Deallocated/Unwritten Error: Supported 
00:08:16.186 Deallocated Read Value: All 0x00 00:08:16.186 Deallocate in Write Zeroes: Not Supported 00:08:16.186 Deallocated Guard Field: 0xFFFF 00:08:16.186 Flush: Supported 00:08:16.186 Reservation: Not Supported 00:08:16.186 Namespace Sharing Capabilities: Private 00:08:16.186 Size (in LBAs): 1048576 (4GiB) 00:08:16.186 Capacity (in LBAs): 1048576 (4GiB) 00:08:16.186 Utilization (in LBAs): 1048576 (4GiB) 00:08:16.186 Thin Provisioning: Not Supported 00:08:16.186 Per-NS Atomic Units: No 00:08:16.186 Maximum Single Source Range Length: 128 00:08:16.186 Maximum Copy Length: 128 00:08:16.186 Maximum Source Range Count: 128 00:08:16.186 NGUID/EUI64 Never Reused: No 00:08:16.186 Namespace Write Protected: No 00:08:16.186 Number of LBA Formats: 8 00:08:16.186 Current LBA Format: LBA Format #04 00:08:16.186 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:16.186 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:16.186 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:16.186 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:16.186 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:16.186 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:16.186 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:16.186 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:16.186 00:08:16.186 NVM Specific Namespace Data 00:08:16.186 =========================== 00:08:16.186 Logical Block Storage Tag Mask: 0 00:08:16.186 Protection Information Capabilities: 00:08:16.186 16b Guard Protection Information Storage Tag Support: No 00:08:16.186 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:16.186 Storage Tag Check Read Support: No 00:08:16.186 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:16.186 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:16.186 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:16.186 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:16.186 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:16.186 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:16.186 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:16.186 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:16.186 06:42:09 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:08:16.186 06:42:09 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' -i 0 00:08:16.448 ===================================================== 00:08:16.448 NVMe Controller at 0000:00:13.0 [1b36:0010] 00:08:16.448 ===================================================== 00:08:16.448 Controller Capabilities/Features 00:08:16.448 ================================ 00:08:16.448 Vendor ID: 1b36 00:08:16.448 Subsystem Vendor ID: 1af4 00:08:16.448 Serial Number: 12343 00:08:16.448 Model Number: QEMU NVMe Ctrl 00:08:16.448 Firmware Version: 8.0.0 00:08:16.448 Recommended Arb Burst: 6 00:08:16.448 IEEE OUI Identifier: 00 54 52 00:08:16.448 Multi-path I/O 00:08:16.448 May have multiple subsystem ports: No 00:08:16.448 May have multiple controllers: Yes 00:08:16.448 Associated with SR-IOV VF: No 00:08:16.448 Max Data Transfer Size: 524288 
00:08:16.448 Max Number of Namespaces: 256 00:08:16.448 Max Number of I/O Queues: 64 00:08:16.448 NVMe Specification Version (VS): 1.4 00:08:16.448 NVMe Specification Version (Identify): 1.4 00:08:16.448 Maximum Queue Entries: 2048 00:08:16.448 Contiguous Queues Required: Yes 00:08:16.448 Arbitration Mechanisms Supported 00:08:16.448 Weighted Round Robin: Not Supported 00:08:16.448 Vendor Specific: Not Supported 00:08:16.448 Reset Timeout: 7500 ms 00:08:16.448 Doorbell Stride: 4 bytes 00:08:16.448 NVM Subsystem Reset: Not Supported 00:08:16.448 Command Sets Supported 00:08:16.448 NVM Command Set: Supported 00:08:16.448 Boot Partition: Not Supported 00:08:16.448 Memory Page Size Minimum: 4096 bytes 00:08:16.448 Memory Page Size Maximum: 65536 bytes 00:08:16.448 Persistent Memory Region: Not Supported 00:08:16.448 Optional Asynchronous Events Supported 00:08:16.448 Namespace Attribute Notices: Supported 00:08:16.448 Firmware Activation Notices: Not Supported 00:08:16.448 ANA Change Notices: Not Supported 00:08:16.448 PLE Aggregate Log Change Notices: Not Supported 00:08:16.448 LBA Status Info Alert Notices: Not Supported 00:08:16.448 EGE Aggregate Log Change Notices: Not Supported 00:08:16.448 Normal NVM Subsystem Shutdown event: Not Supported 00:08:16.448 Zone Descriptor Change Notices: Not Supported 00:08:16.448 Discovery Log Change Notices: Not Supported 00:08:16.448 Controller Attributes 00:08:16.448 128-bit Host Identifier: Not Supported 00:08:16.448 Non-Operational Permissive Mode: Not Supported 00:08:16.448 NVM Sets: Not Supported 00:08:16.448 Read Recovery Levels: Not Supported 00:08:16.448 Endurance Groups: Supported 00:08:16.448 Predictable Latency Mode: Not Supported 00:08:16.448 Traffic Based Keep Alive: Not Supported 00:08:16.448 Namespace Granularity: Not Supported 00:08:16.448 SQ Associations: Not Supported 00:08:16.448 UUID List: Not Supported 00:08:16.448 Multi-Domain Subsystem: Not Supported 00:08:16.448 Fixed Capacity Management: Not Supported 00:08:16.448 Variable Capacity Management: Not Supported 00:08:16.448 Delete Endurance Group: Not Supported 00:08:16.448 Delete NVM Set: Not Supported 00:08:16.448 Extended LBA Formats Supported: Supported 00:08:16.448 Flexible Data Placement Supported: Supported 00:08:16.448 00:08:16.448 Controller Memory Buffer Support 00:08:16.448 ================================ 00:08:16.448 Supported: No 00:08:16.448 00:08:16.448 Persistent Memory Region Support 00:08:16.448 ================================ 00:08:16.448 Supported: No 00:08:16.448 00:08:16.448 Admin Command Set Attributes 00:08:16.448 ============================ 00:08:16.448 Security Send/Receive: Not Supported 00:08:16.448 Format NVM: Supported 00:08:16.448 Firmware Activate/Download: Not Supported 00:08:16.448 Namespace Management: Supported 00:08:16.448 Device Self-Test: Not Supported 00:08:16.448 Directives: Supported 00:08:16.448 NVMe-MI: Not Supported 00:08:16.448 Virtualization Management: Not Supported 00:08:16.448 Doorbell Buffer Config: Supported 00:08:16.448 Get LBA Status Capability: Not Supported 00:08:16.448 Command & Feature Lockdown Capability: Not Supported 00:08:16.448 Abort Command Limit: 4 00:08:16.448 Async Event Request Limit: 4 00:08:16.448 Number of Firmware Slots: N/A 00:08:16.448 Firmware Slot 1 Read-Only: N/A 00:08:16.448 Firmware Activation Without Reset: N/A 00:08:16.448 Multiple Update Detection Support: N/A 00:08:16.448 Firmware Update Granularity: No Information Provided 00:08:16.448 Per-Namespace SMART Log: Yes 00:08:16.448 Asymmetric Namespace
Access Log Page: Not Supported 00:08:16.448 Subsystem NQN: nqn.2019-08.org.qemu:fdp-subsys3 00:08:16.448 Command Effects Log Page: Supported 00:08:16.448 Get Log Page Extended Data: Supported 00:08:16.448 Telemetry Log Pages: Not Supported 00:08:16.448 Persistent Event Log Pages: Not Supported 00:08:16.448 Supported Log Pages Log Page: May Support 00:08:16.448 Commands Supported & Effects Log Page: Not Supported 00:08:16.448 Feature Identifiers & Effects Log Page: May Support 00:08:16.448 NVMe-MI Commands & Effects Log Page: May Support 00:08:16.448 Data Area 4 for Telemetry Log: Not Supported 00:08:16.448 Error Log Page Entries Supported: 1 00:08:16.448 Keep Alive: Not Supported 00:08:16.448 00:08:16.448 NVM Command Set Attributes 00:08:16.448 ========================== 00:08:16.448 Submission Queue Entry Size 00:08:16.448 Max: 64 00:08:16.448 Min: 64 00:08:16.448 Completion Queue Entry Size 00:08:16.448 Max: 16 00:08:16.448 Min: 16 00:08:16.448 Number of Namespaces: 256 00:08:16.448 Compare Command: Supported 00:08:16.448 Write Uncorrectable Command: Not Supported 00:08:16.448 Dataset Management Command: Supported 00:08:16.448 Write Zeroes Command: Supported 00:08:16.448 Set Features Save Field: Supported 00:08:16.448 Reservations: Not Supported 00:08:16.448 Timestamp: Supported 00:08:16.448 Copy: Supported 00:08:16.448 Volatile Write Cache: Present 00:08:16.449 Atomic Write Unit (Normal): 1 00:08:16.449 Atomic Write Unit (PFail): 1 00:08:16.449 Atomic Compare & Write Unit: 1 00:08:16.449 Fused Compare & Write: Not Supported 00:08:16.449 Scatter-Gather List 00:08:16.449 SGL Command Set: Supported 00:08:16.449 SGL Keyed: Not Supported 00:08:16.449 SGL Bit Bucket Descriptor: Not Supported 00:08:16.449 SGL Metadata Pointer: Not Supported 00:08:16.449 Oversized SGL: Not Supported 00:08:16.449 SGL Metadata Address: Not Supported 00:08:16.449 SGL Offset: Not Supported 00:08:16.449 Transport SGL Data Block: Not Supported 00:08:16.449 Replay Protected Memory Block: Not Supported 00:08:16.449 00:08:16.449 Firmware Slot Information 00:08:16.449 ========================= 00:08:16.449 Active slot: 1 00:08:16.449 Slot 1 Firmware Revision: 1.0 00:08:16.449 00:08:16.449 00:08:16.449 Commands Supported and Effects 00:08:16.449 ============================== 00:08:16.449 Admin Commands 00:08:16.449 -------------- 00:08:16.449 Delete I/O Submission Queue (00h): Supported 00:08:16.449 Create I/O Submission Queue (01h): Supported 00:08:16.449 Get Log Page (02h): Supported 00:08:16.449 Delete I/O Completion Queue (04h): Supported 00:08:16.449 Create I/O Completion Queue (05h): Supported 00:08:16.449 Identify (06h): Supported 00:08:16.449 Abort (08h): Supported 00:08:16.449 Set Features (09h): Supported 00:08:16.449 Get Features (0Ah): Supported 00:08:16.449 Asynchronous Event Request (0Ch): Supported 00:08:16.449 Namespace Attachment (15h): Supported NS-Inventory-Change 00:08:16.449 Directive Send (19h): Supported 00:08:16.449 Directive Receive (1Ah): Supported 00:08:16.449 Virtualization Management (1Ch): Supported 00:08:16.449 Doorbell Buffer Config (7Ch): Supported 00:08:16.449 Format NVM (80h): Supported LBA-Change 00:08:16.449 I/O Commands 00:08:16.449 ------------ 00:08:16.449 Flush (00h): Supported LBA-Change 00:08:16.449 Write (01h): Supported LBA-Change 00:08:16.449 Read (02h): Supported 00:08:16.449 Compare (05h): Supported 00:08:16.449 Write Zeroes (08h): Supported LBA-Change 00:08:16.449 Dataset Management (09h): Supported LBA-Change 00:08:16.449 Unknown (0Ch): Supported 00:08:16.449 Unknown
(12h): Supported 00:08:16.449 Copy (19h): Supported LBA-Change 00:08:16.449 Unknown (1Dh): Supported LBA-Change 00:08:16.449 00:08:16.449 Error Log 00:08:16.449 ========= 00:08:16.449 00:08:16.449 Arbitration 00:08:16.449 =========== 00:08:16.449 Arbitration Burst: no limit 00:08:16.449 00:08:16.449 Power Management 00:08:16.449 ================ 00:08:16.449 Number of Power States: 1 00:08:16.449 Current Power State: Power State #0 00:08:16.449 Power State #0: 00:08:16.449 Max Power: 25.00 W 00:08:16.449 Non-Operational State: Operational 00:08:16.449 Entry Latency: 16 microseconds 00:08:16.449 Exit Latency: 4 microseconds 00:08:16.449 Relative Read Throughput: 0 00:08:16.449 Relative Read Latency: 0 00:08:16.449 Relative Write Throughput: 0 00:08:16.449 Relative Write Latency: 0 00:08:16.449 Idle Power: Not Reported 00:08:16.449 Active Power: Not Reported 00:08:16.449 Non-Operational Permissive Mode: Not Supported 00:08:16.449 00:08:16.449 Health Information 00:08:16.449 ================== 00:08:16.449 Critical Warnings: 00:08:16.449 Available Spare Space: OK 00:08:16.449 Temperature: OK 00:08:16.449 Device Reliability: OK 00:08:16.449 Read Only: No 00:08:16.449 Volatile Memory Backup: OK 00:08:16.449 Current Temperature: 323 Kelvin (50 Celsius) 00:08:16.449 Temperature Threshold: 343 Kelvin (70 Celsius) 00:08:16.449 Available Spare: 0% 00:08:16.449 Available Spare Threshold: 0% 00:08:16.449 Life Percentage Used: 0% 00:08:16.449 Data Units Read: 871 00:08:16.449 Data Units Written: 800 00:08:16.449 Host Read Commands: 39580 00:08:16.449 Host Write Commands: 39003 00:08:16.449 Controller Busy Time: 0 minutes 00:08:16.449 Power Cycles: 0 00:08:16.449 Power On Hours: 0 hours 00:08:16.449 Unsafe Shutdowns: 0 00:08:16.449 Unrecoverable Media Errors: 0 00:08:16.449 Lifetime Error Log Entries: 0 00:08:16.449 Warning Temperature Time: 0 minutes 00:08:16.449 Critical Temperature Time: 0 minutes 00:08:16.449 00:08:16.449 Number of Queues 00:08:16.449 ================ 00:08:16.449 Number of I/O Submission Queues: 64 00:08:16.449 Number of I/O Completion Queues: 64 00:08:16.449 00:08:16.449 ZNS Specific Controller Data 00:08:16.449 ============================ 00:08:16.449 Zone Append Size Limit: 0 00:08:16.449 00:08:16.449 00:08:16.449 Active Namespaces 00:08:16.449 ================= 00:08:16.449 Namespace ID:1 00:08:16.449 Error Recovery Timeout: Unlimited 00:08:16.449 Command Set Identifier: NVM (00h) 00:08:16.449 Deallocate: Supported 00:08:16.449 Deallocated/Unwritten Error: Supported 00:08:16.449 Deallocated Read Value: All 0x00 00:08:16.449 Deallocate in Write Zeroes: Not Supported 00:08:16.449 Deallocated Guard Field: 0xFFFF 00:08:16.449 Flush: Supported 00:08:16.449 Reservation: Not Supported 00:08:16.449 Namespace Sharing Capabilities: Multiple Controllers 00:08:16.449 Size (in LBAs): 262144 (1GiB) 00:08:16.449 Capacity (in LBAs): 262144 (1GiB) 00:08:16.449 Utilization (in LBAs): 262144 (1GiB) 00:08:16.449 Thin Provisioning: Not Supported 00:08:16.449 Per-NS Atomic Units: No 00:08:16.449 Maximum Single Source Range Length: 128 00:08:16.449 Maximum Copy Length: 128 00:08:16.449 Maximum Source Range Count: 128 00:08:16.449 NGUID/EUI64 Never Reused: No 00:08:16.449 Namespace Write Protected: No 00:08:16.449 Endurance group ID: 1 00:08:16.449 Number of LBA Formats: 8 00:08:16.449 Current LBA Format: LBA Format #04 00:08:16.449 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:16.449 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:16.449 LBA Format #02: Data Size: 512 Metadata Size: 16 
00:08:16.449 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:16.449 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:16.449 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:16.449 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:16.449 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:16.449 00:08:16.449 Get Feature FDP: 00:08:16.449 ================ 00:08:16.449 Enabled: Yes 00:08:16.449 FDP configuration index: 0 00:08:16.449 00:08:16.449 FDP configurations log page 00:08:16.449 =========================== 00:08:16.449 Number of FDP configurations: 1 00:08:16.449 Version: 0 00:08:16.449 Size: 112 00:08:16.449 FDP Configuration Descriptor: 0 00:08:16.449 Descriptor Size: 96 00:08:16.449 Reclaim Group Identifier format: 2 00:08:16.449 FDP Volatile Write Cache: Not Present 00:08:16.449 FDP Configuration: Valid 00:08:16.449 Vendor Specific Size: 0 00:08:16.449 Number of Reclaim Groups: 2 00:08:16.449 Number of Reclaim Unit Handles: 8 00:08:16.449 Max Placement Identifiers: 128 00:08:16.449 Number of Namespaces Supported: 256 00:08:16.449 Reclaim Unit Nominal Size: 6000000 bytes 00:08:16.449 Estimated Reclaim Unit Time Limit: Not Reported 00:08:16.449 RUH Desc #000: RUH Type: Initially Isolated 00:08:16.449 RUH Desc #001: RUH Type: Initially Isolated 00:08:16.449 RUH Desc #002: RUH Type: Initially Isolated 00:08:16.450 RUH Desc #003: RUH Type: Initially Isolated 00:08:16.450 RUH Desc #004: RUH Type: Initially Isolated 00:08:16.450 RUH Desc #005: RUH Type: Initially Isolated 00:08:16.450 RUH Desc #006: RUH Type: Initially Isolated 00:08:16.450 RUH Desc #007: RUH Type: Initially Isolated 00:08:16.450 00:08:16.450 FDP reclaim unit handle usage log page 00:08:16.450 ====================================== 00:08:16.450 Number of Reclaim Unit Handles: 8 00:08:16.450 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:08:16.450 RUH Usage Desc #001: RUH Attributes: Unused 00:08:16.450 RUH Usage Desc #002: RUH Attributes: Unused 00:08:16.450 RUH Usage Desc #003: RUH Attributes: Unused 00:08:16.450 RUH Usage Desc #004: RUH Attributes: Unused 00:08:16.450 RUH Usage Desc #005: RUH Attributes: Unused 00:08:16.450 RUH Usage Desc #006: RUH Attributes: Unused 00:08:16.450 RUH Usage Desc #007: RUH Attributes: Unused 00:08:16.450 00:08:16.450 FDP statistics log page 00:08:16.450 ======================= 00:08:16.450 Host bytes with metadata written: 504930304 00:08:16.450 Media bytes with metadata written: 504987648 00:08:16.450 Media bytes erased: 0 00:08:16.450 00:08:16.450 FDP events log page 00:08:16.450 =================== 00:08:16.450 Number of FDP events: 0 00:08:16.450 00:08:16.450 NVM Specific Namespace Data 00:08:16.450 =========================== 00:08:16.450 Logical Block Storage Tag Mask: 0 00:08:16.450 Protection Information Capabilities: 00:08:16.450 16b Guard Protection Information Storage Tag Support: No 00:08:16.450 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:16.450 Storage Tag Check Read Support: No 00:08:16.450 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:16.450 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:16.450 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:16.450 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:16.450 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
00:08:16.450 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:16.450 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:16.450 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:16.450 00:08:16.450 real 0m1.175s 00:08:16.450 user 0m0.417s 00:08:16.450 sys 0m0.535s 00:08:16.450 06:42:09 nvme.nvme_identify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:16.450 06:42:09 nvme.nvme_identify -- common/autotest_common.sh@10 -- # set +x 00:08:16.450 ************************************ 00:08:16.450 END TEST nvme_identify 00:08:16.450 ************************************ 00:08:16.450 06:42:09 nvme -- nvme/nvme.sh@86 -- # run_test nvme_perf nvme_perf 00:08:16.450 06:42:09 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:16.450 06:42:09 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:16.450 06:42:09 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:16.450 ************************************ 00:08:16.450 START TEST nvme_perf 00:08:16.450 ************************************ 00:08:16.450 06:42:09 nvme.nvme_perf -- common/autotest_common.sh@1129 -- # nvme_perf 00:08:16.450 06:42:09 nvme.nvme_perf -- nvme/nvme.sh@22 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w read -o 12288 -t 1 -LL -i 0 -N 00:08:17.835 Initializing NVMe Controllers 00:08:17.835 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:08:17.835 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:08:17.835 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:08:17.835 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:08:17.835 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:08:17.835 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:08:17.835 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:08:17.835 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:08:17.835 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:08:17.835 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:08:17.835 Initialization complete. Launching workers. 
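The summary table that follows reports both an IOPS and a MiB/s column per namespace. The two columns are tied together by the 12288-byte I/O size requested via the -o 12288 flag in the spdk_nvme_perf command line above (taking MiB/s to mean 2^20 bytes per second), and the Total row is the sum of the six per-namespace rows. A minimal Python sketch, with IOPS values copied from the table below, reproduces both relationships:

# Cross-check of the spdk_nvme_perf summary below: with -o 12288, each
# completed read moves 12288 bytes, so MiB/s = IOPS * 12288 / 2**20.
IO_SIZE = 12288  # bytes per I/O, from the -o flag in the command line above

iops = {  # IOPS column copied from the summary table below
    "PCIE (0000:00:11.0) NSID 1": 17239.21,
    "PCIE (0000:00:13.0) NSID 1": 17239.21,
    "PCIE (0000:00:10.0) NSID 1": 17303.06,
    "PCIE (0000:00:12.0) NSID 1": 17303.06,
    "PCIE (0000:00:12.0) NSID 2": 17303.06,
    "PCIE (0000:00:12.0) NSID 3": 17303.06,
}

for dev, rate in iops.items():
    # 17239.21 IOPS -> 202.02 MiB/s; 17303.06 IOPS -> 202.77 MiB/s,
    # matching the MiB/s column of the table.
    print(f"{dev}: {rate * IO_SIZE / 2**20:.2f} MiB/s")

# Prints ~103690.66; the table's Total of 103690.67 differs only because
# the per-row IOPS values are already rounded to two decimals.
print(f"Total: {sum(iops.values()):.2f} IOPS")

The queue depth (-q 128) bounds outstanding I/O rather than bytes moved, so it does not enter this conversion; the per-device latency histograms further below break the same one-second run (-t 1) into cumulative microsecond buckets.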
00:08:17.835 ======================================================== 00:08:17.835 Latency(us) 00:08:17.835 Device Information : IOPS MiB/s Average min max 00:08:17.835 PCIE (0000:00:11.0) NSID 1 from core 0: 17239.21 202.02 7427.06 4524.19 45170.71 00:08:17.835 PCIE (0000:00:13.0) NSID 1 from core 0: 17239.21 202.02 7419.46 4319.64 45289.42 00:08:17.835 PCIE (0000:00:10.0) NSID 1 from core 0: 17303.06 202.77 7382.23 4058.19 36850.51 00:08:17.835 PCIE (0000:00:12.0) NSID 1 from core 0: 17303.06 202.77 7374.27 3913.17 36528.46 00:08:17.835 PCIE (0000:00:12.0) NSID 2 from core 0: 17303.06 202.77 7365.43 3414.91 37249.16 00:08:17.835 PCIE (0000:00:12.0) NSID 3 from core 0: 17303.06 202.77 7356.77 3221.66 36851.11 00:08:17.835 ======================================================== 00:08:17.835 Total : 103690.67 1215.13 7387.49 3221.66 45289.42 00:08:17.835 00:08:17.835 Summary latency data for PCIE (0000:00:11.0) NSID 1 from core 0: 00:08:17.835 ================================================================================= 00:08:17.835 1.00000% : 5873.034us 00:08:17.835 10.00000% : 6099.889us 00:08:17.835 25.00000% : 6301.538us 00:08:17.835 50.00000% : 6604.012us 00:08:17.835 75.00000% : 7158.548us 00:08:17.835 90.00000% : 9679.163us 00:08:17.835 95.00000% : 10788.234us 00:08:17.835 98.00000% : 15022.868us 00:08:17.835 99.00000% : 18249.255us 00:08:17.835 99.50000% : 35893.563us 00:08:17.835 99.90000% : 44967.778us 00:08:17.835 99.99000% : 45169.428us 00:08:17.835 99.99900% : 45371.077us 00:08:17.835 99.99990% : 45371.077us 00:08:17.835 99.99999% : 45371.077us 00:08:17.835 00:08:17.835 Summary latency data for PCIE (0000:00:13.0) NSID 1 from core 0: 00:08:17.835 ================================================================================= 00:08:17.835 1.00000% : 5847.828us 00:08:17.835 10.00000% : 6099.889us 00:08:17.835 25.00000% : 6276.332us 00:08:17.835 50.00000% : 6604.012us 00:08:17.835 75.00000% : 7108.135us 00:08:17.835 90.00000% : 9527.926us 00:08:17.835 95.00000% : 10788.234us 00:08:17.835 98.00000% : 15022.868us 00:08:17.835 99.00000% : 18249.255us 00:08:17.835 99.50000% : 35893.563us 00:08:17.835 99.90000% : 44967.778us 00:08:17.835 99.99000% : 45371.077us 00:08:17.835 99.99900% : 45371.077us 00:08:17.835 99.99990% : 45371.077us 00:08:17.835 99.99999% : 45371.077us 00:08:17.835 00:08:17.835 Summary latency data for PCIE (0000:00:10.0) NSID 1 from core 0: 00:08:17.835 ================================================================================= 00:08:17.835 1.00000% : 5772.209us 00:08:17.835 10.00000% : 6024.271us 00:08:17.835 25.00000% : 6276.332us 00:08:17.835 50.00000% : 6604.012us 00:08:17.835 75.00000% : 7158.548us 00:08:17.835 90.00000% : 9477.514us 00:08:17.835 95.00000% : 11090.708us 00:08:17.835 98.00000% : 15829.465us 00:08:17.835 99.00000% : 18350.080us 00:08:17.835 99.50000% : 26214.400us 00:08:17.835 99.90000% : 36700.160us 00:08:17.835 99.99000% : 36901.809us 00:08:17.835 99.99900% : 36901.809us 00:08:17.835 99.99990% : 36901.809us 00:08:17.835 99.99999% : 36901.809us 00:08:17.835 00:08:17.835 Summary latency data for PCIE (0000:00:12.0) NSID 1 from core 0: 00:08:17.835 ================================================================================= 00:08:17.835 1.00000% : 5797.415us 00:08:17.835 10.00000% : 6074.683us 00:08:17.835 25.00000% : 6276.332us 00:08:17.835 50.00000% : 6604.012us 00:08:17.835 75.00000% : 7158.548us 00:08:17.835 90.00000% : 9477.514us 00:08:17.835 95.00000% : 11342.769us 00:08:17.835 98.00000% : 15224.517us 00:08:17.835 99.00000% 
: 18551.729us 00:08:17.835 99.50000% : 25407.803us 00:08:17.835 99.90000% : 36296.862us 00:08:17.835 99.99000% : 36700.160us 00:08:17.835 99.99900% : 36700.160us 00:08:17.835 99.99990% : 36700.160us 00:08:17.835 99.99999% : 36700.160us 00:08:17.835 00:08:17.835 Summary latency data for PCIE (0000:00:12.0) NSID 2 from core 0: 00:08:17.835 ================================================================================= 00:08:17.835 1.00000% : 5797.415us 00:08:17.835 10.00000% : 6099.889us 00:08:17.835 25.00000% : 6276.332us 00:08:17.835 50.00000% : 6604.012us 00:08:17.835 75.00000% : 7158.548us 00:08:17.835 90.00000% : 9628.751us 00:08:17.835 95.00000% : 11241.945us 00:08:17.835 98.00000% : 15123.692us 00:08:17.835 99.00000% : 18047.606us 00:08:17.835 99.50000% : 27424.295us 00:08:17.835 99.90000% : 37103.458us 00:08:17.835 99.99000% : 37305.108us 00:08:17.835 99.99900% : 37305.108us 00:08:17.835 99.99990% : 37305.108us 00:08:17.835 99.99999% : 37305.108us 00:08:17.835 00:08:17.835 Summary latency data for PCIE (0000:00:12.0) NSID 3 from core 0: 00:08:17.835 ================================================================================= 00:08:17.835 1.00000% : 5822.622us 00:08:17.835 10.00000% : 6099.889us 00:08:17.835 25.00000% : 6276.332us 00:08:17.835 50.00000% : 6604.012us 00:08:17.835 75.00000% : 7158.548us 00:08:17.835 90.00000% : 9729.575us 00:08:17.835 95.00000% : 10889.058us 00:08:17.835 98.00000% : 14922.043us 00:08:17.835 99.00000% : 18047.606us 00:08:17.835 99.50000% : 27222.646us 00:08:17.835 99.90000% : 36700.160us 00:08:17.835 99.99000% : 36901.809us 00:08:17.835 99.99900% : 36901.809us 00:08:17.835 99.99990% : 36901.809us 00:08:17.835 99.99999% : 36901.809us 00:08:17.835 00:08:17.835 Latency histogram for PCIE (0000:00:11.0) NSID 1 from core 0: 00:08:17.835 ============================================================================== 00:08:17.835 Range in us Cumulative IO count 00:08:17.835 4511.902 - 4537.108: 0.0116% ( 2) 00:08:17.835 4537.108 - 4562.314: 0.0231% ( 2) 00:08:17.835 4562.314 - 4587.520: 0.0405% ( 3) 00:08:17.835 4587.520 - 4612.726: 0.0521% ( 2) 00:08:17.835 4612.726 - 4637.932: 0.0637% ( 2) 00:08:17.835 4637.932 - 4663.138: 0.0868% ( 4) 00:08:17.835 4663.138 - 4688.345: 0.0984% ( 2) 00:08:17.835 4688.345 - 4713.551: 0.1100% ( 2) 00:08:17.835 4713.551 - 4738.757: 0.1273% ( 3) 00:08:17.835 4738.757 - 4763.963: 0.1389% ( 2) 00:08:17.835 4763.963 - 4789.169: 0.1562% ( 3) 00:08:17.835 4789.169 - 4814.375: 0.1678% ( 2) 00:08:17.835 4814.375 - 4839.582: 0.1794% ( 2) 00:08:17.835 4839.582 - 4864.788: 0.1910% ( 2) 00:08:17.835 4864.788 - 4889.994: 0.2025% ( 2) 00:08:17.835 4889.994 - 4915.200: 0.2141% ( 2) 00:08:17.835 4915.200 - 4940.406: 0.2257% ( 2) 00:08:17.835 4940.406 - 4965.612: 0.2373% ( 2) 00:08:17.835 4965.612 - 4990.818: 0.2488% ( 2) 00:08:17.835 4990.818 - 5016.025: 0.2604% ( 2) 00:08:17.835 5016.025 - 5041.231: 0.2720% ( 2) 00:08:17.835 5041.231 - 5066.437: 0.2836% ( 2) 00:08:17.835 5066.437 - 5091.643: 0.2951% ( 2) 00:08:17.835 5091.643 - 5116.849: 0.3067% ( 2) 00:08:17.835 5116.849 - 5142.055: 0.3241% ( 3) 00:08:17.836 5142.055 - 5167.262: 0.3356% ( 2) 00:08:17.836 5167.262 - 5192.468: 0.3472% ( 2) 00:08:17.836 5192.468 - 5217.674: 0.3646% ( 3) 00:08:17.836 5217.674 - 5242.880: 0.3704% ( 1) 00:08:17.836 5646.178 - 5671.385: 0.3762% ( 1) 00:08:17.836 5671.385 - 5696.591: 0.4167% ( 7) 00:08:17.836 5696.591 - 5721.797: 0.4514% ( 6) 00:08:17.836 5721.797 - 5747.003: 0.4919% ( 7) 00:08:17.836 5747.003 - 5772.209: 0.5382% ( 8) 00:08:17.836 5772.209 - 
5797.415: 0.6424% ( 18) 00:08:17.836 5797.415 - 5822.622: 0.7986% ( 27) 00:08:17.836 5822.622 - 5847.828: 0.9838% ( 32) 00:08:17.836 5847.828 - 5873.034: 1.1921% ( 36) 00:08:17.836 5873.034 - 5898.240: 1.5509% ( 62) 00:08:17.836 5898.240 - 5923.446: 2.0370% ( 84) 00:08:17.836 5923.446 - 5948.652: 2.7431% ( 122) 00:08:17.836 5948.652 - 5973.858: 3.6748% ( 161) 00:08:17.836 5973.858 - 5999.065: 4.8727% ( 207) 00:08:17.836 5999.065 - 6024.271: 6.4525% ( 273) 00:08:17.836 6024.271 - 6049.477: 8.0035% ( 268) 00:08:17.836 6049.477 - 6074.683: 9.5602% ( 269) 00:08:17.836 6074.683 - 6099.889: 11.2616% ( 294) 00:08:17.836 6099.889 - 6125.095: 13.0035% ( 301) 00:08:17.836 6125.095 - 6150.302: 14.8958% ( 327) 00:08:17.836 6150.302 - 6175.508: 16.8229% ( 333) 00:08:17.836 6175.508 - 6200.714: 18.8889% ( 357) 00:08:17.836 6200.714 - 6225.920: 20.8681% ( 342) 00:08:17.836 6225.920 - 6251.126: 22.8472% ( 342) 00:08:17.836 6251.126 - 6276.332: 24.9248% ( 359) 00:08:17.836 6276.332 - 6301.538: 26.9676% ( 353) 00:08:17.836 6301.538 - 6326.745: 29.0625% ( 362) 00:08:17.836 6326.745 - 6351.951: 31.1111% ( 354) 00:08:17.836 6351.951 - 6377.157: 33.1655% ( 355) 00:08:17.836 6377.157 - 6402.363: 35.1910% ( 350) 00:08:17.836 6402.363 - 6427.569: 37.3264% ( 369) 00:08:17.836 6427.569 - 6452.775: 39.5139% ( 378) 00:08:17.836 6452.775 - 6503.188: 43.8831% ( 755) 00:08:17.836 6503.188 - 6553.600: 48.1366% ( 735) 00:08:17.836 6553.600 - 6604.012: 52.3495% ( 728) 00:08:17.836 6604.012 - 6654.425: 56.4757% ( 713) 00:08:17.836 6654.425 - 6704.837: 60.5671% ( 707) 00:08:17.836 6704.837 - 6755.249: 64.3634% ( 656) 00:08:17.836 6755.249 - 6805.662: 67.5579% ( 552) 00:08:17.836 6805.662 - 6856.074: 69.7049% ( 371) 00:08:17.836 6856.074 - 6906.486: 71.2269% ( 263) 00:08:17.836 6906.486 - 6956.898: 72.4306% ( 208) 00:08:17.836 6956.898 - 7007.311: 73.3912% ( 166) 00:08:17.836 7007.311 - 7057.723: 74.1551% ( 132) 00:08:17.836 7057.723 - 7108.135: 74.8553% ( 121) 00:08:17.836 7108.135 - 7158.548: 75.5093% ( 113) 00:08:17.836 7158.548 - 7208.960: 76.1458% ( 110) 00:08:17.836 7208.960 - 7259.372: 76.6667% ( 90) 00:08:17.836 7259.372 - 7309.785: 77.1123% ( 77) 00:08:17.836 7309.785 - 7360.197: 77.5405% ( 74) 00:08:17.836 7360.197 - 7410.609: 77.9688% ( 74) 00:08:17.836 7410.609 - 7461.022: 78.3565% ( 67) 00:08:17.836 7461.022 - 7511.434: 78.7211% ( 63) 00:08:17.836 7511.434 - 7561.846: 79.1319% ( 71) 00:08:17.836 7561.846 - 7612.258: 79.5312% ( 69) 00:08:17.836 7612.258 - 7662.671: 79.9016% ( 64) 00:08:17.836 7662.671 - 7713.083: 80.2373% ( 58) 00:08:17.836 7713.083 - 7763.495: 80.5440% ( 53) 00:08:17.836 7763.495 - 7813.908: 80.8623% ( 55) 00:08:17.836 7813.908 - 7864.320: 81.1979% ( 58) 00:08:17.836 7864.320 - 7914.732: 81.5278% ( 57) 00:08:17.836 7914.732 - 7965.145: 81.8576% ( 57) 00:08:17.836 7965.145 - 8015.557: 82.1586% ( 52) 00:08:17.836 8015.557 - 8065.969: 82.4421% ( 49) 00:08:17.836 8065.969 - 8116.382: 82.7373% ( 51) 00:08:17.836 8116.382 - 8166.794: 83.0035% ( 46) 00:08:17.836 8166.794 - 8217.206: 83.2581% ( 44) 00:08:17.836 8217.206 - 8267.618: 83.5069% ( 43) 00:08:17.836 8267.618 - 8318.031: 83.7500% ( 42) 00:08:17.836 8318.031 - 8368.443: 83.9815% ( 40) 00:08:17.836 8368.443 - 8418.855: 84.1956% ( 37) 00:08:17.836 8418.855 - 8469.268: 84.4560% ( 45) 00:08:17.836 8469.268 - 8519.680: 84.6933% ( 41) 00:08:17.836 8519.680 - 8570.092: 84.9537% ( 45) 00:08:17.836 8570.092 - 8620.505: 85.2488% ( 51) 00:08:17.836 8620.505 - 8670.917: 85.5440% ( 51) 00:08:17.836 8670.917 - 8721.329: 85.8333% ( 50) 00:08:17.836 8721.329 - 
8771.742: 86.1053% ( 47) 00:08:17.836 8771.742 - 8822.154: 86.3715% ( 46) 00:08:17.836 8822.154 - 8872.566: 86.6840% ( 54) 00:08:17.836 8872.566 - 8922.978: 86.9329% ( 43) 00:08:17.836 8922.978 - 8973.391: 87.1991% ( 46) 00:08:17.836 8973.391 - 9023.803: 87.4479% ( 43) 00:08:17.836 9023.803 - 9074.215: 87.6968% ( 43) 00:08:17.836 9074.215 - 9124.628: 87.9225% ( 39) 00:08:17.836 9124.628 - 9175.040: 88.1481% ( 39) 00:08:17.836 9175.040 - 9225.452: 88.3796% ( 40) 00:08:17.836 9225.452 - 9275.865: 88.6285% ( 43) 00:08:17.836 9275.865 - 9326.277: 88.8600% ( 40) 00:08:17.836 9326.277 - 9376.689: 89.0799% ( 38) 00:08:17.836 9376.689 - 9427.102: 89.2477% ( 29) 00:08:17.836 9427.102 - 9477.514: 89.3981% ( 26) 00:08:17.836 9477.514 - 9527.926: 89.5486% ( 26) 00:08:17.836 9527.926 - 9578.338: 89.7396% ( 33) 00:08:17.836 9578.338 - 9628.751: 89.9306% ( 33) 00:08:17.836 9628.751 - 9679.163: 90.1562% ( 39) 00:08:17.836 9679.163 - 9729.575: 90.4051% ( 43) 00:08:17.836 9729.575 - 9779.988: 90.6424% ( 41) 00:08:17.836 9779.988 - 9830.400: 90.8796% ( 41) 00:08:17.836 9830.400 - 9880.812: 91.0880% ( 36) 00:08:17.836 9880.812 - 9931.225: 91.3426% ( 44) 00:08:17.836 9931.225 - 9981.637: 91.5741% ( 40) 00:08:17.836 9981.637 - 10032.049: 91.7882% ( 37) 00:08:17.836 10032.049 - 10082.462: 92.0602% ( 47) 00:08:17.836 10082.462 - 10132.874: 92.2975% ( 41) 00:08:17.836 10132.874 - 10183.286: 92.5637% ( 46) 00:08:17.836 10183.286 - 10233.698: 92.8125% ( 43) 00:08:17.836 10233.698 - 10284.111: 93.0671% ( 44) 00:08:17.836 10284.111 - 10334.523: 93.2986% ( 40) 00:08:17.836 10334.523 - 10384.935: 93.5475% ( 43) 00:08:17.836 10384.935 - 10435.348: 93.7616% ( 37) 00:08:17.836 10435.348 - 10485.760: 93.9815% ( 38) 00:08:17.836 10485.760 - 10536.172: 94.1609% ( 31) 00:08:17.836 10536.172 - 10586.585: 94.3634% ( 35) 00:08:17.836 10586.585 - 10636.997: 94.5660% ( 35) 00:08:17.836 10636.997 - 10687.409: 94.7280% ( 28) 00:08:17.836 10687.409 - 10737.822: 94.9306% ( 35) 00:08:17.836 10737.822 - 10788.234: 95.1042% ( 30) 00:08:17.836 10788.234 - 10838.646: 95.2488% ( 25) 00:08:17.836 10838.646 - 10889.058: 95.3993% ( 26) 00:08:17.836 10889.058 - 10939.471: 95.5440% ( 25) 00:08:17.836 10939.471 - 10989.883: 95.6597% ( 20) 00:08:17.836 10989.883 - 11040.295: 95.7697% ( 19) 00:08:17.836 11040.295 - 11090.708: 95.8623% ( 16) 00:08:17.836 11090.708 - 11141.120: 95.9491% ( 15) 00:08:17.836 11141.120 - 11191.532: 96.0127% ( 11) 00:08:17.836 11191.532 - 11241.945: 96.0648% ( 9) 00:08:17.836 11241.945 - 11292.357: 96.1053% ( 7) 00:08:17.836 11292.357 - 11342.769: 96.1458% ( 7) 00:08:17.836 11342.769 - 11393.182: 96.1863% ( 7) 00:08:17.836 11393.182 - 11443.594: 96.2037% ( 3) 00:08:17.836 11443.594 - 11494.006: 96.2211% ( 3) 00:08:17.836 11494.006 - 11544.418: 96.2384% ( 3) 00:08:17.836 11544.418 - 11594.831: 96.2500% ( 2) 00:08:17.836 11594.831 - 11645.243: 96.2674% ( 3) 00:08:17.836 11645.243 - 11695.655: 96.2789% ( 2) 00:08:17.836 11695.655 - 11746.068: 96.3194% ( 7) 00:08:17.836 11746.068 - 11796.480: 96.3542% ( 6) 00:08:17.836 11796.480 - 11846.892: 96.3947% ( 7) 00:08:17.836 11846.892 - 11897.305: 96.4352% ( 7) 00:08:17.836 11897.305 - 11947.717: 96.4815% ( 8) 00:08:17.836 11947.717 - 11998.129: 96.5220% ( 7) 00:08:17.836 11998.129 - 12048.542: 96.5914% ( 12) 00:08:17.836 12048.542 - 12098.954: 96.6551% ( 11) 00:08:17.836 12098.954 - 12149.366: 96.7188% ( 11) 00:08:17.836 12149.366 - 12199.778: 96.7824% ( 11) 00:08:17.836 12199.778 - 12250.191: 96.8461% ( 11) 00:08:17.836 12250.191 - 12300.603: 96.9155% ( 12) 00:08:17.836 12300.603 - 
12351.015: 96.9850% ( 12) 00:08:17.836 12351.015 - 12401.428: 97.0428% ( 10) 00:08:17.836 12401.428 - 12451.840: 97.1007% ( 10) 00:08:17.836 12451.840 - 12502.252: 97.1701% ( 12) 00:08:17.836 12502.252 - 12552.665: 97.2280% ( 10) 00:08:17.836 12552.665 - 12603.077: 97.2685% ( 7) 00:08:17.836 12603.077 - 12653.489: 97.3032% ( 6) 00:08:17.836 12653.489 - 12703.902: 97.3264% ( 4) 00:08:17.836 12703.902 - 12754.314: 97.3438% ( 3) 00:08:17.836 12754.314 - 12804.726: 97.3669% ( 4) 00:08:17.836 12804.726 - 12855.138: 97.3843% ( 3) 00:08:17.836 12855.138 - 12905.551: 97.4016% ( 3) 00:08:17.836 12905.551 - 13006.375: 97.4074% ( 1) 00:08:17.836 13107.200 - 13208.025: 97.4363% ( 5) 00:08:17.836 13208.025 - 13308.849: 97.4653% ( 5) 00:08:17.836 13308.849 - 13409.674: 97.5116% ( 8) 00:08:17.836 13409.674 - 13510.498: 97.5463% ( 6) 00:08:17.836 13510.498 - 13611.323: 97.5752% ( 5) 00:08:17.836 13611.323 - 13712.148: 97.6100% ( 6) 00:08:17.836 13712.148 - 13812.972: 97.6331% ( 4) 00:08:17.836 13812.972 - 13913.797: 97.6678% ( 6) 00:08:17.836 13913.797 - 14014.622: 97.7025% ( 6) 00:08:17.836 14014.622 - 14115.446: 97.7373% ( 6) 00:08:17.836 14115.446 - 14216.271: 97.7662% ( 5) 00:08:17.836 14216.271 - 14317.095: 97.7778% ( 2) 00:08:17.836 14317.095 - 14417.920: 97.8241% ( 8) 00:08:17.836 14417.920 - 14518.745: 97.8472% ( 4) 00:08:17.836 14518.745 - 14619.569: 97.8762% ( 5) 00:08:17.836 14619.569 - 14720.394: 97.9051% ( 5) 00:08:17.836 14720.394 - 14821.218: 97.9398% ( 6) 00:08:17.836 14821.218 - 14922.043: 97.9745% ( 6) 00:08:17.836 14922.043 - 15022.868: 98.0093% ( 6) 00:08:17.837 15022.868 - 15123.692: 98.0382% ( 5) 00:08:17.837 15123.692 - 15224.517: 98.0729% ( 6) 00:08:17.837 15224.517 - 15325.342: 98.1481% ( 13) 00:08:17.837 15325.342 - 15426.166: 98.2118% ( 11) 00:08:17.837 15426.166 - 15526.991: 98.2697% ( 10) 00:08:17.837 15526.991 - 15627.815: 98.3160% ( 8) 00:08:17.837 15627.815 - 15728.640: 98.3738% ( 10) 00:08:17.837 15728.640 - 15829.465: 98.4086% ( 6) 00:08:17.837 15829.465 - 15930.289: 98.4433% ( 6) 00:08:17.837 15930.289 - 16031.114: 98.4664% ( 4) 00:08:17.837 16031.114 - 16131.938: 98.4780% ( 2) 00:08:17.837 16131.938 - 16232.763: 98.4896% ( 2) 00:08:17.837 16232.763 - 16333.588: 98.5127% ( 4) 00:08:17.837 16333.588 - 16434.412: 98.5185% ( 1) 00:08:17.837 17241.009 - 17341.834: 98.5301% ( 2) 00:08:17.837 17341.834 - 17442.658: 98.5475% ( 3) 00:08:17.837 17442.658 - 17543.483: 98.5880% ( 7) 00:08:17.837 17543.483 - 17644.308: 98.6343% ( 8) 00:08:17.837 17644.308 - 17745.132: 98.7095% ( 13) 00:08:17.837 17745.132 - 17845.957: 98.7789% ( 12) 00:08:17.837 17845.957 - 17946.782: 98.8310% ( 9) 00:08:17.837 17946.782 - 18047.606: 98.8947% ( 11) 00:08:17.837 18047.606 - 18148.431: 98.9641% ( 12) 00:08:17.837 18148.431 - 18249.255: 99.0278% ( 11) 00:08:17.837 18249.255 - 18350.080: 99.0914% ( 11) 00:08:17.837 18350.080 - 18450.905: 99.1551% ( 11) 00:08:17.837 18450.905 - 18551.729: 99.2188% ( 11) 00:08:17.837 18551.729 - 18652.554: 99.2535% ( 6) 00:08:17.837 18652.554 - 18753.378: 99.2593% ( 1) 00:08:17.837 34885.317 - 35086.966: 99.2766% ( 3) 00:08:17.837 35086.966 - 35288.615: 99.3403% ( 11) 00:08:17.837 35288.615 - 35490.265: 99.3981% ( 10) 00:08:17.837 35490.265 - 35691.914: 99.4618% ( 11) 00:08:17.837 35691.914 - 35893.563: 99.5255% ( 11) 00:08:17.837 35893.563 - 36095.212: 99.5891% ( 11) 00:08:17.837 36095.212 - 36296.862: 99.6296% ( 7) 00:08:17.837 43959.532 - 44161.182: 99.6759% ( 8) 00:08:17.837 44161.182 - 44362.831: 99.7396% ( 11) 00:08:17.837 44362.831 - 44564.480: 99.8090% ( 12) 
00:08:17.837 44564.480 - 44766.129: 99.8727% ( 11) 00:08:17.837 44766.129 - 44967.778: 99.9363% ( 11) 00:08:17.837 44967.778 - 45169.428: 99.9942% ( 10) 00:08:17.837 45169.428 - 45371.077: 100.0000% ( 1) 00:08:17.837 00:08:17.837 Latency histogram for PCIE (0000:00:13.0) NSID 1 from core 0: 00:08:17.837 ============================================================================== 00:08:17.837 Range in us Cumulative IO count 00:08:17.837 4310.252 - 4335.458: 0.0116% ( 2) 00:08:17.837 4335.458 - 4360.665: 0.0174% ( 1) 00:08:17.837 4360.665 - 4385.871: 0.0347% ( 3) 00:08:17.837 4385.871 - 4411.077: 0.0463% ( 2) 00:08:17.837 4411.077 - 4436.283: 0.0579% ( 2) 00:08:17.837 4436.283 - 4461.489: 0.0752% ( 3) 00:08:17.837 4461.489 - 4486.695: 0.0926% ( 3) 00:08:17.837 4486.695 - 4511.902: 0.1042% ( 2) 00:08:17.837 4511.902 - 4537.108: 0.1215% ( 3) 00:08:17.837 4537.108 - 4562.314: 0.1331% ( 2) 00:08:17.837 4562.314 - 4587.520: 0.1505% ( 3) 00:08:17.837 4587.520 - 4612.726: 0.1620% ( 2) 00:08:17.837 4612.726 - 4637.932: 0.1736% ( 2) 00:08:17.837 4637.932 - 4663.138: 0.1852% ( 2) 00:08:17.837 4663.138 - 4688.345: 0.1968% ( 2) 00:08:17.837 4688.345 - 4713.551: 0.2141% ( 3) 00:08:17.837 4713.551 - 4738.757: 0.2257% ( 2) 00:08:17.837 4738.757 - 4763.963: 0.2431% ( 3) 00:08:17.837 4763.963 - 4789.169: 0.2546% ( 2) 00:08:17.837 4789.169 - 4814.375: 0.2662% ( 2) 00:08:17.837 4814.375 - 4839.582: 0.2836% ( 3) 00:08:17.837 4839.582 - 4864.788: 0.2951% ( 2) 00:08:17.837 4864.788 - 4889.994: 0.3125% ( 3) 00:08:17.837 4889.994 - 4915.200: 0.3241% ( 2) 00:08:17.837 4915.200 - 4940.406: 0.3414% ( 3) 00:08:17.837 4940.406 - 4965.612: 0.3530% ( 2) 00:08:17.837 4965.612 - 4990.818: 0.3646% ( 2) 00:08:17.837 4990.818 - 5016.025: 0.3704% ( 1) 00:08:17.837 5620.972 - 5646.178: 0.3819% ( 2) 00:08:17.837 5646.178 - 5671.385: 0.4051% ( 4) 00:08:17.837 5671.385 - 5696.591: 0.4456% ( 7) 00:08:17.837 5696.591 - 5721.797: 0.4919% ( 8) 00:08:17.837 5721.797 - 5747.003: 0.5613% ( 12) 00:08:17.837 5747.003 - 5772.209: 0.6250% ( 11) 00:08:17.837 5772.209 - 5797.415: 0.7465% ( 21) 00:08:17.837 5797.415 - 5822.622: 0.8796% ( 23) 00:08:17.837 5822.622 - 5847.828: 1.0590% ( 31) 00:08:17.837 5847.828 - 5873.034: 1.3368% ( 48) 00:08:17.837 5873.034 - 5898.240: 1.7072% ( 64) 00:08:17.837 5898.240 - 5923.446: 2.2859% ( 100) 00:08:17.837 5923.446 - 5948.652: 3.0035% ( 124) 00:08:17.837 5948.652 - 5973.858: 4.0162% ( 175) 00:08:17.837 5973.858 - 5999.065: 5.4109% ( 241) 00:08:17.837 5999.065 - 6024.271: 6.7188% ( 226) 00:08:17.837 6024.271 - 6049.477: 8.1366% ( 245) 00:08:17.837 6049.477 - 6074.683: 9.6586% ( 263) 00:08:17.837 6074.683 - 6099.889: 11.3947% ( 300) 00:08:17.837 6099.889 - 6125.095: 13.1481% ( 303) 00:08:17.837 6125.095 - 6150.302: 15.0174% ( 323) 00:08:17.837 6150.302 - 6175.508: 17.0660% ( 354) 00:08:17.837 6175.508 - 6200.714: 19.0509% ( 343) 00:08:17.837 6200.714 - 6225.920: 21.0185% ( 340) 00:08:17.837 6225.920 - 6251.126: 23.0150% ( 345) 00:08:17.837 6251.126 - 6276.332: 25.0579% ( 353) 00:08:17.837 6276.332 - 6301.538: 27.1354% ( 359) 00:08:17.837 6301.538 - 6326.745: 29.2188% ( 360) 00:08:17.837 6326.745 - 6351.951: 31.2095% ( 344) 00:08:17.837 6351.951 - 6377.157: 33.2986% ( 361) 00:08:17.837 6377.157 - 6402.363: 35.3993% ( 363) 00:08:17.837 6402.363 - 6427.569: 37.5116% ( 365) 00:08:17.837 6427.569 - 6452.775: 39.6065% ( 362) 00:08:17.837 6452.775 - 6503.188: 43.8542% ( 734) 00:08:17.837 6503.188 - 6553.600: 48.1308% ( 739) 00:08:17.837 6553.600 - 6604.012: 52.4074% ( 739) 00:08:17.837 6604.012 - 6654.425: 56.5567% 
( 717) 00:08:17.837 6654.425 - 6704.837: 60.5382% ( 688) 00:08:17.837 6704.837 - 6755.249: 64.2708% ( 645) 00:08:17.837 6755.249 - 6805.662: 67.3206% ( 527) 00:08:17.837 6805.662 - 6856.074: 69.5486% ( 385) 00:08:17.837 6856.074 - 6906.486: 71.2731% ( 298) 00:08:17.837 6906.486 - 6956.898: 72.6042% ( 230) 00:08:17.837 6956.898 - 7007.311: 73.6343% ( 178) 00:08:17.837 7007.311 - 7057.723: 74.4213% ( 136) 00:08:17.837 7057.723 - 7108.135: 75.1620% ( 128) 00:08:17.837 7108.135 - 7158.548: 75.8738% ( 123) 00:08:17.837 7158.548 - 7208.960: 76.4931% ( 107) 00:08:17.837 7208.960 - 7259.372: 77.0370% ( 94) 00:08:17.837 7259.372 - 7309.785: 77.4826% ( 77) 00:08:17.837 7309.785 - 7360.197: 77.8935% ( 71) 00:08:17.837 7360.197 - 7410.609: 78.2928% ( 69) 00:08:17.837 7410.609 - 7461.022: 78.7095% ( 72) 00:08:17.837 7461.022 - 7511.434: 79.0394% ( 57) 00:08:17.837 7511.434 - 7561.846: 79.4097% ( 64) 00:08:17.837 7561.846 - 7612.258: 79.7743% ( 63) 00:08:17.837 7612.258 - 7662.671: 80.0405% ( 46) 00:08:17.837 7662.671 - 7713.083: 80.3356% ( 51) 00:08:17.837 7713.083 - 7763.495: 80.6019% ( 46) 00:08:17.837 7763.495 - 7813.908: 80.9144% ( 54) 00:08:17.837 7813.908 - 7864.320: 81.2153% ( 52) 00:08:17.837 7864.320 - 7914.732: 81.4931% ( 48) 00:08:17.837 7914.732 - 7965.145: 81.6898% ( 34) 00:08:17.837 7965.145 - 8015.557: 81.9097% ( 38) 00:08:17.837 8015.557 - 8065.969: 82.1586% ( 43) 00:08:17.837 8065.969 - 8116.382: 82.3669% ( 36) 00:08:17.837 8116.382 - 8166.794: 82.5868% ( 38) 00:08:17.837 8166.794 - 8217.206: 82.8762% ( 50) 00:08:17.837 8217.206 - 8267.618: 83.1019% ( 39) 00:08:17.837 8267.618 - 8318.031: 83.2986% ( 34) 00:08:17.837 8318.031 - 8368.443: 83.5475% ( 43) 00:08:17.837 8368.443 - 8418.855: 83.7558% ( 36) 00:08:17.837 8418.855 - 8469.268: 84.0162% ( 45) 00:08:17.837 8469.268 - 8519.680: 84.2477% ( 40) 00:08:17.837 8519.680 - 8570.092: 84.5197% ( 47) 00:08:17.837 8570.092 - 8620.505: 84.8148% ( 51) 00:08:17.837 8620.505 - 8670.917: 85.1331% ( 55) 00:08:17.837 8670.917 - 8721.329: 85.4340% ( 52) 00:08:17.837 8721.329 - 8771.742: 85.7697% ( 58) 00:08:17.837 8771.742 - 8822.154: 86.0417% ( 47) 00:08:17.837 8822.154 - 8872.566: 86.2905% ( 43) 00:08:17.837 8872.566 - 8922.978: 86.6030% ( 54) 00:08:17.837 8922.978 - 8973.391: 86.8808% ( 48) 00:08:17.837 8973.391 - 9023.803: 87.1933% ( 54) 00:08:17.837 9023.803 - 9074.215: 87.4942% ( 52) 00:08:17.837 9074.215 - 9124.628: 87.8067% ( 54) 00:08:17.837 9124.628 - 9175.040: 88.1134% ( 53) 00:08:17.837 9175.040 - 9225.452: 88.4317% ( 55) 00:08:17.837 9225.452 - 9275.865: 88.7153% ( 49) 00:08:17.837 9275.865 - 9326.277: 89.0046% ( 50) 00:08:17.837 9326.277 - 9376.689: 89.2824% ( 48) 00:08:17.837 9376.689 - 9427.102: 89.5891% ( 53) 00:08:17.837 9427.102 - 9477.514: 89.9016% ( 54) 00:08:17.837 9477.514 - 9527.926: 90.1736% ( 47) 00:08:17.837 9527.926 - 9578.338: 90.4167% ( 42) 00:08:17.837 9578.338 - 9628.751: 90.6771% ( 45) 00:08:17.837 9628.751 - 9679.163: 90.9375% ( 45) 00:08:17.837 9679.163 - 9729.575: 91.1748% ( 41) 00:08:17.837 9729.575 - 9779.988: 91.4410% ( 46) 00:08:17.837 9779.988 - 9830.400: 91.6956% ( 44) 00:08:17.837 9830.400 - 9880.812: 91.9213% ( 39) 00:08:17.837 9880.812 - 9931.225: 92.1354% ( 37) 00:08:17.837 9931.225 - 9981.637: 92.2859% ( 26) 00:08:17.837 9981.637 - 10032.049: 92.4479% ( 28) 00:08:17.837 10032.049 - 10082.462: 92.6505% ( 35) 00:08:17.837 10082.462 - 10132.874: 92.8646% ( 37) 00:08:17.837 10132.874 - 10183.286: 93.0556% ( 33) 00:08:17.837 10183.286 - 10233.698: 93.2870% ( 40) 00:08:17.837 10233.698 - 10284.111: 93.5069% ( 
38) 00:08:17.837 10284.111 - 10334.523: 93.7037% ( 34) 00:08:17.838 10334.523 - 10384.935: 93.9062% ( 35) 00:08:17.838 10384.935 - 10435.348: 94.0856% ( 31) 00:08:17.838 10435.348 - 10485.760: 94.2361% ( 26) 00:08:17.838 10485.760 - 10536.172: 94.3808% ( 25) 00:08:17.838 10536.172 - 10586.585: 94.5312% ( 26) 00:08:17.838 10586.585 - 10636.997: 94.6586% ( 22) 00:08:17.838 10636.997 - 10687.409: 94.7801% ( 21) 00:08:17.838 10687.409 - 10737.822: 94.9132% ( 23) 00:08:17.838 10737.822 - 10788.234: 95.0289% ( 20) 00:08:17.838 10788.234 - 10838.646: 95.1562% ( 22) 00:08:17.838 10838.646 - 10889.058: 95.2778% ( 21) 00:08:17.838 10889.058 - 10939.471: 95.3993% ( 21) 00:08:17.838 10939.471 - 10989.883: 95.5208% ( 21) 00:08:17.838 10989.883 - 11040.295: 95.5961% ( 13) 00:08:17.838 11040.295 - 11090.708: 95.6713% ( 13) 00:08:17.838 11090.708 - 11141.120: 95.7407% ( 12) 00:08:17.838 11141.120 - 11191.532: 95.8218% ( 14) 00:08:17.838 11191.532 - 11241.945: 95.8796% ( 10) 00:08:17.838 11241.945 - 11292.357: 95.9433% ( 11) 00:08:17.838 11292.357 - 11342.769: 96.0069% ( 11) 00:08:17.838 11342.769 - 11393.182: 96.0648% ( 10) 00:08:17.838 11393.182 - 11443.594: 96.1053% ( 7) 00:08:17.838 11443.594 - 11494.006: 96.1458% ( 7) 00:08:17.838 11494.006 - 11544.418: 96.1863% ( 7) 00:08:17.838 11544.418 - 11594.831: 96.2095% ( 4) 00:08:17.838 11594.831 - 11645.243: 96.2326% ( 4) 00:08:17.838 11645.243 - 11695.655: 96.2674% ( 6) 00:08:17.838 11695.655 - 11746.068: 96.3137% ( 8) 00:08:17.838 11746.068 - 11796.480: 96.3600% ( 8) 00:08:17.838 11796.480 - 11846.892: 96.3773% ( 3) 00:08:17.838 11846.892 - 11897.305: 96.4062% ( 5) 00:08:17.838 11897.305 - 11947.717: 96.4236% ( 3) 00:08:17.838 11947.717 - 11998.129: 96.4525% ( 5) 00:08:17.838 11998.129 - 12048.542: 96.4988% ( 8) 00:08:17.838 12048.542 - 12098.954: 96.5741% ( 13) 00:08:17.838 12098.954 - 12149.366: 96.6262% ( 9) 00:08:17.838 12149.366 - 12199.778: 96.6898% ( 11) 00:08:17.838 12199.778 - 12250.191: 96.7477% ( 10) 00:08:17.838 12250.191 - 12300.603: 96.8056% ( 10) 00:08:17.838 12300.603 - 12351.015: 96.8634% ( 10) 00:08:17.838 12351.015 - 12401.428: 96.9213% ( 10) 00:08:17.838 12401.428 - 12451.840: 96.9850% ( 11) 00:08:17.838 12451.840 - 12502.252: 97.0428% ( 10) 00:08:17.838 12502.252 - 12552.665: 97.1065% ( 11) 00:08:17.838 12552.665 - 12603.077: 97.1528% ( 8) 00:08:17.838 12603.077 - 12653.489: 97.2222% ( 12) 00:08:17.838 12653.489 - 12703.902: 97.2685% ( 8) 00:08:17.838 12703.902 - 12754.314: 97.3206% ( 9) 00:08:17.838 12754.314 - 12804.726: 97.3785% ( 10) 00:08:17.838 12804.726 - 12855.138: 97.4306% ( 9) 00:08:17.838 12855.138 - 12905.551: 97.4711% ( 7) 00:08:17.838 12905.551 - 13006.375: 97.5405% ( 12) 00:08:17.838 13006.375 - 13107.200: 97.5694% ( 5) 00:08:17.838 13107.200 - 13208.025: 97.5984% ( 5) 00:08:17.838 13208.025 - 13308.849: 97.6331% ( 6) 00:08:17.838 13308.849 - 13409.674: 97.6620% ( 5) 00:08:17.838 13409.674 - 13510.498: 97.6968% ( 6) 00:08:17.838 13510.498 - 13611.323: 97.7315% ( 6) 00:08:17.838 13611.323 - 13712.148: 97.7604% ( 5) 00:08:17.838 13712.148 - 13812.972: 97.7778% ( 3) 00:08:17.838 14417.920 - 14518.745: 97.8009% ( 4) 00:08:17.838 14518.745 - 14619.569: 97.8877% ( 15) 00:08:17.838 14619.569 - 14720.394: 97.9109% ( 4) 00:08:17.838 14720.394 - 14821.218: 97.9282% ( 3) 00:08:17.838 14821.218 - 14922.043: 97.9977% ( 12) 00:08:17.838 14922.043 - 15022.868: 98.1192% ( 21) 00:08:17.838 15022.868 - 15123.692: 98.1944% ( 13) 00:08:17.838 15123.692 - 15224.517: 98.2523% ( 10) 00:08:17.838 15224.517 - 15325.342: 98.3044% ( 9) 
00:08:17.838 15325.342 - 15426.166: 98.3391% ( 6) 00:08:17.838 15426.166 - 15526.991: 98.3854% ( 8) 00:08:17.838 15526.991 - 15627.815: 98.4086% ( 4) 00:08:17.838 15627.815 - 15728.640: 98.4317% ( 4) 00:08:17.838 15728.640 - 15829.465: 98.4491% ( 3) 00:08:17.838 15829.465 - 15930.289: 98.4664% ( 3) 00:08:17.838 15930.289 - 16031.114: 98.4896% ( 4) 00:08:17.838 16031.114 - 16131.938: 98.4954% ( 1) 00:08:17.838 16131.938 - 16232.763: 98.5127% ( 3) 00:08:17.838 16232.763 - 16333.588: 98.5185% ( 1) 00:08:17.838 17241.009 - 17341.834: 98.5475% ( 5) 00:08:17.838 17341.834 - 17442.658: 98.5880% ( 7) 00:08:17.838 17442.658 - 17543.483: 98.6227% ( 6) 00:08:17.838 17543.483 - 17644.308: 98.6806% ( 10) 00:08:17.838 17644.308 - 17745.132: 98.7442% ( 11) 00:08:17.838 17745.132 - 17845.957: 98.7905% ( 8) 00:08:17.838 17845.957 - 17946.782: 98.8426% ( 9) 00:08:17.838 17946.782 - 18047.606: 98.8947% ( 9) 00:08:17.838 18047.606 - 18148.431: 98.9468% ( 9) 00:08:17.838 18148.431 - 18249.255: 99.0104% ( 11) 00:08:17.838 18249.255 - 18350.080: 99.0683% ( 10) 00:08:17.838 18350.080 - 18450.905: 99.1204% ( 9) 00:08:17.838 18450.905 - 18551.729: 99.1667% ( 8) 00:08:17.838 18551.729 - 18652.554: 99.1898% ( 4) 00:08:17.838 18652.554 - 18753.378: 99.2188% ( 5) 00:08:17.838 18753.378 - 18854.203: 99.2419% ( 4) 00:08:17.838 18854.203 - 18955.028: 99.2593% ( 3) 00:08:17.838 34885.317 - 35086.966: 99.2998% ( 7) 00:08:17.838 35086.966 - 35288.615: 99.3461% ( 8) 00:08:17.838 35288.615 - 35490.265: 99.4097% ( 11) 00:08:17.838 35490.265 - 35691.914: 99.4676% ( 10) 00:08:17.838 35691.914 - 35893.563: 99.5312% ( 11) 00:08:17.838 35893.563 - 36095.212: 99.5949% ( 11) 00:08:17.838 36095.212 - 36296.862: 99.6296% ( 6) 00:08:17.838 43959.532 - 44161.182: 99.6817% ( 9) 00:08:17.838 44161.182 - 44362.831: 99.7396% ( 10) 00:08:17.838 44362.831 - 44564.480: 99.7975% ( 10) 00:08:17.838 44564.480 - 44766.129: 99.8438% ( 8) 00:08:17.838 44766.129 - 44967.778: 99.9074% ( 11) 00:08:17.838 44967.778 - 45169.428: 99.9653% ( 10) 00:08:17.838 45169.428 - 45371.077: 100.0000% ( 6) 00:08:17.838 00:08:17.838 Latency histogram for PCIE (0000:00:10.0) NSID 1 from core 0: 00:08:17.838 ============================================================================== 00:08:17.838 Range in us Cumulative IO count 00:08:17.838 4058.191 - 4083.397: 0.0173% ( 3) 00:08:17.838 4083.397 - 4108.603: 0.0231% ( 1) 00:08:17.838 4108.603 - 4133.809: 0.0404% ( 3) 00:08:17.838 4133.809 - 4159.015: 0.0519% ( 2) 00:08:17.838 4159.015 - 4184.222: 0.0634% ( 2) 00:08:17.838 4184.222 - 4209.428: 0.0750% ( 2) 00:08:17.838 4209.428 - 4234.634: 0.0865% ( 2) 00:08:17.838 4234.634 - 4259.840: 0.0923% ( 1) 00:08:17.838 4259.840 - 4285.046: 0.1095% ( 3) 00:08:17.838 4285.046 - 4310.252: 0.1211% ( 2) 00:08:17.838 4310.252 - 4335.458: 0.1326% ( 2) 00:08:17.838 4335.458 - 4360.665: 0.1441% ( 2) 00:08:17.838 4360.665 - 4385.871: 0.1557% ( 2) 00:08:17.838 4385.871 - 4411.077: 0.1672% ( 2) 00:08:17.838 4411.077 - 4436.283: 0.1787% ( 2) 00:08:17.838 4436.283 - 4461.489: 0.1845% ( 1) 00:08:17.838 4461.489 - 4486.695: 0.2018% ( 3) 00:08:17.838 4486.695 - 4511.902: 0.2076% ( 1) 00:08:17.838 4511.902 - 4537.108: 0.2249% ( 3) 00:08:17.838 4537.108 - 4562.314: 0.2364% ( 2) 00:08:17.838 4562.314 - 4587.520: 0.2479% ( 2) 00:08:17.838 4587.520 - 4612.726: 0.2595% ( 2) 00:08:17.838 4612.726 - 4637.932: 0.2710% ( 2) 00:08:17.838 4637.932 - 4663.138: 0.2825% ( 2) 00:08:17.838 4663.138 - 4688.345: 0.2940% ( 2) 00:08:17.838 4688.345 - 4713.551: 0.3056% ( 2) 00:08:17.838 4713.551 - 4738.757: 0.3229% ( 
3) 00:08:17.838 4738.757 - 4763.963: 0.3286% ( 1) 00:08:17.838 4763.963 - 4789.169: 0.3402% ( 2) 00:08:17.838 4789.169 - 4814.375: 0.3517% ( 2) 00:08:17.838 4814.375 - 4839.582: 0.3632% ( 2) 00:08:17.838 4839.582 - 4864.788: 0.3690% ( 1) 00:08:17.838 5545.354 - 5570.560: 0.3748% ( 1) 00:08:17.838 5570.560 - 5595.766: 0.4036% ( 5) 00:08:17.838 5595.766 - 5620.972: 0.4497% ( 8) 00:08:17.838 5620.972 - 5646.178: 0.4901% ( 7) 00:08:17.838 5646.178 - 5671.385: 0.5477% ( 10) 00:08:17.839 5671.385 - 5696.591: 0.6227% ( 13) 00:08:17.839 5696.591 - 5721.797: 0.7380% ( 20) 00:08:17.839 5721.797 - 5747.003: 0.8764% ( 24) 00:08:17.839 5747.003 - 5772.209: 1.1012% ( 39) 00:08:17.839 5772.209 - 5797.415: 1.3722% ( 47) 00:08:17.839 5797.415 - 5822.622: 1.7528% ( 66) 00:08:17.839 5822.622 - 5847.828: 2.3639% ( 106) 00:08:17.839 5847.828 - 5873.034: 3.0500% ( 119) 00:08:17.839 5873.034 - 5898.240: 3.9495% ( 156) 00:08:17.839 5898.240 - 5923.446: 5.0334% ( 188) 00:08:17.839 5923.446 - 5948.652: 6.1577% ( 195) 00:08:17.839 5948.652 - 5973.858: 7.4954% ( 232) 00:08:17.839 5973.858 - 5999.065: 8.7811% ( 223) 00:08:17.839 5999.065 - 6024.271: 10.2399% ( 253) 00:08:17.839 6024.271 - 6049.477: 11.7274% ( 258) 00:08:17.839 6049.477 - 6074.683: 13.3072% ( 274) 00:08:17.839 6074.683 - 6099.889: 14.9389% ( 283) 00:08:17.839 6099.889 - 6125.095: 16.6052% ( 289) 00:08:17.839 6125.095 - 6150.302: 18.2023% ( 277) 00:08:17.839 6150.302 - 6175.508: 19.8512% ( 286) 00:08:17.839 6175.508 - 6200.714: 21.5982% ( 303) 00:08:17.839 6200.714 - 6225.920: 23.3452% ( 303) 00:08:17.839 6225.920 - 6251.126: 24.9769% ( 283) 00:08:17.839 6251.126 - 6276.332: 26.7297% ( 304) 00:08:17.839 6276.332 - 6301.538: 28.4536% ( 299) 00:08:17.839 6301.538 - 6326.745: 30.2064% ( 304) 00:08:17.839 6326.745 - 6351.951: 31.8900% ( 292) 00:08:17.839 6351.951 - 6377.157: 33.6774% ( 310) 00:08:17.839 6377.157 - 6402.363: 35.4474% ( 307) 00:08:17.839 6402.363 - 6427.569: 37.2809% ( 318) 00:08:17.839 6427.569 - 6452.775: 39.2066% ( 334) 00:08:17.839 6452.775 - 6503.188: 42.8736% ( 636) 00:08:17.839 6503.188 - 6553.600: 46.5464% ( 637) 00:08:17.839 6553.600 - 6604.012: 50.2133% ( 636) 00:08:17.839 6604.012 - 6654.425: 53.8226% ( 626) 00:08:17.839 6654.425 - 6704.837: 57.3974% ( 620) 00:08:17.839 6704.837 - 6755.249: 60.9433% ( 615) 00:08:17.839 6755.249 - 6805.662: 64.3450% ( 590) 00:08:17.839 6805.662 - 6856.074: 67.3662% ( 524) 00:08:17.839 6856.074 - 6906.486: 69.6033% ( 388) 00:08:17.839 6906.486 - 6956.898: 71.4368% ( 318) 00:08:17.839 6956.898 - 7007.311: 72.7975% ( 236) 00:08:17.839 7007.311 - 7057.723: 73.8699% ( 186) 00:08:17.839 7057.723 - 7108.135: 74.7117% ( 146) 00:08:17.839 7108.135 - 7158.548: 75.4094% ( 121) 00:08:17.839 7158.548 - 7208.960: 76.0378% ( 109) 00:08:17.839 7208.960 - 7259.372: 76.5856% ( 95) 00:08:17.839 7259.372 - 7309.785: 77.1448% ( 97) 00:08:17.839 7309.785 - 7360.197: 77.6753% ( 92) 00:08:17.839 7360.197 - 7410.609: 78.1538% ( 83) 00:08:17.839 7410.609 - 7461.022: 78.5401% ( 67) 00:08:17.839 7461.022 - 7511.434: 78.9034% ( 63) 00:08:17.839 7511.434 - 7561.846: 79.2262% ( 56) 00:08:17.839 7561.846 - 7612.258: 79.5434% ( 55) 00:08:17.839 7612.258 - 7662.671: 79.8662% ( 56) 00:08:17.839 7662.671 - 7713.083: 80.1315% ( 46) 00:08:17.839 7713.083 - 7763.495: 80.4255% ( 51) 00:08:17.839 7763.495 - 7813.908: 80.7023% ( 48) 00:08:17.839 7813.908 - 7864.320: 80.9271% ( 39) 00:08:17.839 7864.320 - 7914.732: 81.1520% ( 39) 00:08:17.839 7914.732 - 7965.145: 81.3884% ( 41) 00:08:17.839 7965.145 - 8015.557: 81.6305% ( 42) 00:08:17.839 
8015.557 - 8065.969: 81.8496% ( 38) 00:08:17.839 8065.969 - 8116.382: 82.0745% ( 39) 00:08:17.839 8116.382 - 8166.794: 82.2821% ( 36) 00:08:17.839 8166.794 - 8217.206: 82.5242% ( 42) 00:08:17.839 8217.206 - 8267.618: 82.7202% ( 34) 00:08:17.839 8267.618 - 8318.031: 82.9220% ( 35) 00:08:17.839 8318.031 - 8368.443: 83.1181% ( 34) 00:08:17.839 8368.443 - 8418.855: 83.3026% ( 32) 00:08:17.839 8418.855 - 8469.268: 83.5390% ( 41) 00:08:17.839 8469.268 - 8519.680: 83.7927% ( 44) 00:08:17.839 8519.680 - 8570.092: 84.0521% ( 45) 00:08:17.839 8570.092 - 8620.505: 84.3231% ( 47) 00:08:17.839 8620.505 - 8670.917: 84.6287% ( 53) 00:08:17.839 8670.917 - 8721.329: 84.9689% ( 59) 00:08:17.839 8721.329 - 8771.742: 85.2399% ( 47) 00:08:17.839 8771.742 - 8822.154: 85.5800% ( 59) 00:08:17.839 8822.154 - 8872.566: 85.8971% ( 55) 00:08:17.839 8872.566 - 8922.978: 86.2373% ( 59) 00:08:17.839 8922.978 - 8973.391: 86.5833% ( 60) 00:08:17.839 8973.391 - 9023.803: 86.9350% ( 61) 00:08:17.839 9023.803 - 9074.215: 87.2751% ( 59) 00:08:17.839 9074.215 - 9124.628: 87.6326% ( 62) 00:08:17.839 9124.628 - 9175.040: 87.9901% ( 62) 00:08:17.839 9175.040 - 9225.452: 88.3533% ( 63) 00:08:17.839 9225.452 - 9275.865: 88.6877% ( 58) 00:08:17.839 9275.865 - 9326.277: 89.0106% ( 56) 00:08:17.839 9326.277 - 9376.689: 89.3450% ( 58) 00:08:17.839 9376.689 - 9427.102: 89.7371% ( 68) 00:08:17.839 9427.102 - 9477.514: 90.0946% ( 62) 00:08:17.839 9477.514 - 9527.926: 90.3598% ( 46) 00:08:17.839 9527.926 - 9578.338: 90.6654% ( 53) 00:08:17.839 9578.338 - 9628.751: 90.8960% ( 40) 00:08:17.839 9628.751 - 9679.163: 91.0863% ( 33) 00:08:17.839 9679.163 - 9729.575: 91.3054% ( 38) 00:08:17.839 9729.575 - 9779.988: 91.5071% ( 35) 00:08:17.839 9779.988 - 9830.400: 91.7147% ( 36) 00:08:17.839 9830.400 - 9880.812: 91.9107% ( 34) 00:08:17.839 9880.812 - 9931.225: 92.1010% ( 33) 00:08:17.839 9931.225 - 9981.637: 92.2798% ( 31) 00:08:17.839 9981.637 - 10032.049: 92.4643% ( 32) 00:08:17.839 10032.049 - 10082.462: 92.6199% ( 27) 00:08:17.839 10082.462 - 10132.874: 92.7641% ( 25) 00:08:17.839 10132.874 - 10183.286: 92.9197% ( 27) 00:08:17.839 10183.286 - 10233.698: 93.0639% ( 25) 00:08:17.839 10233.698 - 10284.111: 93.2080% ( 25) 00:08:17.839 10284.111 - 10334.523: 93.3464% ( 24) 00:08:17.839 10334.523 - 10384.935: 93.4963% ( 26) 00:08:17.839 10384.935 - 10435.348: 93.6059% ( 19) 00:08:17.839 10435.348 - 10485.760: 93.7442% ( 24) 00:08:17.839 10485.760 - 10536.172: 93.8595% ( 20) 00:08:17.839 10536.172 - 10586.585: 93.9864% ( 22) 00:08:17.839 10586.585 - 10636.997: 94.1017% ( 20) 00:08:17.840 10636.997 - 10687.409: 94.1767% ( 13) 00:08:17.840 10687.409 - 10737.822: 94.2401% ( 11) 00:08:17.840 10737.822 - 10788.234: 94.3208% ( 14) 00:08:17.840 10788.234 - 10838.646: 94.4649% ( 25) 00:08:17.840 10838.646 - 10889.058: 94.5860% ( 21) 00:08:17.840 10889.058 - 10939.471: 94.7071% ( 21) 00:08:17.840 10939.471 - 10989.883: 94.8167% ( 19) 00:08:17.840 10989.883 - 11040.295: 94.9147% ( 17) 00:08:17.840 11040.295 - 11090.708: 95.0069% ( 16) 00:08:17.840 11090.708 - 11141.120: 95.0934% ( 15) 00:08:17.840 11141.120 - 11191.532: 95.1799% ( 15) 00:08:17.840 11191.532 - 11241.945: 95.2664% ( 15) 00:08:17.840 11241.945 - 11292.357: 95.3356% ( 12) 00:08:17.840 11292.357 - 11342.769: 95.4048% ( 12) 00:08:17.840 11342.769 - 11393.182: 95.4624% ( 10) 00:08:17.840 11393.182 - 11443.594: 95.5258% ( 11) 00:08:17.840 11443.594 - 11494.006: 95.5950% ( 12) 00:08:17.840 11494.006 - 11544.418: 95.6642% ( 12) 00:08:17.840 11544.418 - 11594.831: 95.7334% ( 12) 00:08:17.840 11594.831 - 
11645.243: 95.8026% ( 12) 00:08:17.840 11645.243 - 11695.655: 95.8891% ( 15) 00:08:17.840 11695.655 - 11746.068: 95.9871% ( 17) 00:08:17.840 11746.068 - 11796.480: 96.0678% ( 14) 00:08:17.840 11796.480 - 11846.892: 96.1139% ( 8) 00:08:17.840 11846.892 - 11897.305: 96.1485% ( 6) 00:08:17.840 11897.305 - 11947.717: 96.2119% ( 11) 00:08:17.840 11947.717 - 11998.129: 96.2581% ( 8) 00:08:17.840 11998.129 - 12048.542: 96.3388% ( 14) 00:08:17.840 12048.542 - 12098.954: 96.4022% ( 11) 00:08:17.840 12098.954 - 12149.366: 96.4541% ( 9) 00:08:17.840 12149.366 - 12199.778: 96.5464% ( 16) 00:08:17.840 12199.778 - 12250.191: 96.6098% ( 11) 00:08:17.840 12250.191 - 12300.603: 96.6732% ( 11) 00:08:17.840 12300.603 - 12351.015: 96.7309% ( 10) 00:08:17.840 12351.015 - 12401.428: 96.7885% ( 10) 00:08:17.840 12401.428 - 12451.840: 96.8519% ( 11) 00:08:17.840 12451.840 - 12502.252: 96.9154% ( 11) 00:08:17.840 12502.252 - 12552.665: 96.9788% ( 11) 00:08:17.840 12552.665 - 12603.077: 97.0364% ( 10) 00:08:17.840 12603.077 - 12653.489: 97.0883% ( 9) 00:08:17.840 12653.489 - 12703.902: 97.1460% ( 10) 00:08:17.840 12703.902 - 12754.314: 97.1921% ( 8) 00:08:17.840 12754.314 - 12804.726: 97.2209% ( 5) 00:08:17.840 12804.726 - 12855.138: 97.2498% ( 5) 00:08:17.840 12855.138 - 12905.551: 97.2786% ( 5) 00:08:17.840 12905.551 - 13006.375: 97.3190% ( 7) 00:08:17.840 13006.375 - 13107.200: 97.3478% ( 5) 00:08:17.840 13107.200 - 13208.025: 97.3651% ( 3) 00:08:17.840 13208.025 - 13308.849: 97.3939% ( 5) 00:08:17.840 13308.849 - 13409.674: 97.4170% ( 4) 00:08:17.840 14317.095 - 14417.920: 97.4285% ( 2) 00:08:17.840 14417.920 - 14518.745: 97.4516% ( 4) 00:08:17.840 14518.745 - 14619.569: 97.5092% ( 10) 00:08:17.840 14619.569 - 14720.394: 97.5381% ( 5) 00:08:17.840 14720.394 - 14821.218: 97.5899% ( 9) 00:08:17.840 14821.218 - 14922.043: 97.6418% ( 9) 00:08:17.840 14922.043 - 15022.868: 97.6880% ( 8) 00:08:17.840 15022.868 - 15123.692: 97.7226% ( 6) 00:08:17.840 15123.692 - 15224.517: 97.7571% ( 6) 00:08:17.840 15224.517 - 15325.342: 97.8321% ( 13) 00:08:17.840 15325.342 - 15426.166: 97.8667% ( 6) 00:08:17.840 15426.166 - 15526.991: 97.8955% ( 5) 00:08:17.840 15526.991 - 15627.815: 97.9474% ( 9) 00:08:17.840 15627.815 - 15728.640: 97.9820% ( 6) 00:08:17.840 15728.640 - 15829.465: 98.0281% ( 8) 00:08:17.840 15829.465 - 15930.289: 98.0512% ( 4) 00:08:17.840 15930.289 - 16031.114: 98.0685% ( 3) 00:08:17.840 16031.114 - 16131.938: 98.0858% ( 3) 00:08:17.840 16131.938 - 16232.763: 98.1031% ( 3) 00:08:17.840 16232.763 - 16333.588: 98.1262% ( 4) 00:08:17.840 16333.588 - 16434.412: 98.1435% ( 3) 00:08:17.840 16434.412 - 16535.237: 98.1550% ( 2) 00:08:17.840 16535.237 - 16636.062: 98.1723% ( 3) 00:08:17.840 16636.062 - 16736.886: 98.2126% ( 7) 00:08:17.840 16736.886 - 16837.711: 98.2242% ( 2) 00:08:17.840 16837.711 - 16938.535: 98.2645% ( 7) 00:08:17.840 16938.535 - 17039.360: 98.2703% ( 1) 00:08:17.840 17039.360 - 17140.185: 98.3107% ( 7) 00:08:17.840 17140.185 - 17241.009: 98.3568% ( 8) 00:08:17.840 17241.009 - 17341.834: 98.4087% ( 9) 00:08:17.840 17341.834 - 17442.658: 98.4836% ( 13) 00:08:17.840 17442.658 - 17543.483: 98.5413% ( 10) 00:08:17.840 17543.483 - 17644.308: 98.5989% ( 10) 00:08:17.840 17644.308 - 17745.132: 98.6854% ( 15) 00:08:17.840 17745.132 - 17845.957: 98.7315% ( 8) 00:08:17.840 17845.957 - 17946.782: 98.8123% ( 14) 00:08:17.840 17946.782 - 18047.606: 98.8757% ( 11) 00:08:17.840 18047.606 - 18148.431: 98.9333% ( 10) 00:08:17.840 18148.431 - 18249.255: 98.9679% ( 6) 00:08:17.840 18249.255 - 18350.080: 99.0256% ( 10) 
00:08:17.840 18350.080 - 18450.905: 99.0717% ( 8) 00:08:17.840 18450.905 - 18551.729: 99.1179% ( 8) 00:08:17.840 18551.729 - 18652.554: 99.1582% ( 7) 00:08:17.840 18652.554 - 18753.378: 99.2043% ( 8) 00:08:17.840 18753.378 - 18854.203: 99.2389% ( 6) 00:08:17.840 18854.203 - 18955.028: 99.2505% ( 2) 00:08:17.840 18955.028 - 19055.852: 99.2620% ( 2) 00:08:17.840 25407.803 - 25508.628: 99.3485% ( 15) 00:08:17.840 25609.452 - 25710.277: 99.3715% ( 4) 00:08:17.840 25710.277 - 25811.102: 99.3946% ( 4) 00:08:17.840 25811.102 - 26012.751: 99.4465% ( 9) 00:08:17.840 26012.751 - 26214.400: 99.5099% ( 11) 00:08:17.840 26214.400 - 26416.049: 99.5618% ( 9) 00:08:17.840 26416.049 - 26617.698: 99.6195% ( 10) 00:08:17.840 26617.698 - 26819.348: 99.6310% ( 2) 00:08:17.840 35490.265 - 35691.914: 99.6771% ( 8) 00:08:17.840 35691.914 - 35893.563: 99.7348% ( 10) 00:08:17.840 35893.563 - 36095.212: 99.7809% ( 8) 00:08:17.840 36095.212 - 36296.862: 99.8386% ( 10) 00:08:17.840 36296.862 - 36498.511: 99.8962% ( 10) 00:08:17.841 36498.511 - 36700.160: 99.9539% ( 10) 00:08:17.841 36700.160 - 36901.809: 100.0000% ( 8) 00:08:17.841 00:08:17.841 Latency histogram for PCIE (0000:00:12.0) NSID 1 from core 0: 00:08:17.841 ============================================================================== 00:08:17.841 Range in us Cumulative IO count 00:08:17.841 3906.954 - 3932.160: 0.0231% ( 4) 00:08:17.841 3932.160 - 3957.366: 0.0288% ( 1) 00:08:17.841 3957.366 - 3982.572: 0.0461% ( 3) 00:08:17.841 3982.572 - 4007.778: 0.0577% ( 2) 00:08:17.841 4007.778 - 4032.985: 0.0692% ( 2) 00:08:17.841 4032.985 - 4058.191: 0.0865% ( 3) 00:08:17.841 4058.191 - 4083.397: 0.1038% ( 3) 00:08:17.841 4083.397 - 4108.603: 0.1211% ( 3) 00:08:17.841 4108.603 - 4133.809: 0.1326% ( 2) 00:08:17.841 4133.809 - 4159.015: 0.1441% ( 2) 00:08:17.841 4159.015 - 4184.222: 0.1614% ( 3) 00:08:17.841 4184.222 - 4209.428: 0.1730% ( 2) 00:08:17.841 4209.428 - 4234.634: 0.1845% ( 2) 00:08:17.841 4234.634 - 4259.840: 0.2018% ( 3) 00:08:17.841 4259.840 - 4285.046: 0.2133% ( 2) 00:08:17.841 4285.046 - 4310.252: 0.2191% ( 1) 00:08:17.841 4310.252 - 4335.458: 0.2364% ( 3) 00:08:17.841 4335.458 - 4360.665: 0.2479% ( 2) 00:08:17.841 4360.665 - 4385.871: 0.2595% ( 2) 00:08:17.841 4385.871 - 4411.077: 0.2768% ( 3) 00:08:17.841 4411.077 - 4436.283: 0.2883% ( 2) 00:08:17.841 4436.283 - 4461.489: 0.3056% ( 3) 00:08:17.841 4461.489 - 4486.695: 0.3171% ( 2) 00:08:17.841 4486.695 - 4511.902: 0.3286% ( 2) 00:08:17.841 4511.902 - 4537.108: 0.3459% ( 3) 00:08:17.841 4537.108 - 4562.314: 0.3517% ( 1) 00:08:17.841 4562.314 - 4587.520: 0.3690% ( 3) 00:08:17.841 5570.560 - 5595.766: 0.3863% ( 3) 00:08:17.841 5595.766 - 5620.972: 0.4209% ( 6) 00:08:17.841 5620.972 - 5646.178: 0.4497% ( 5) 00:08:17.841 5646.178 - 5671.385: 0.5131% ( 11) 00:08:17.841 5671.385 - 5696.591: 0.5939% ( 14) 00:08:17.841 5696.591 - 5721.797: 0.6688% ( 13) 00:08:17.841 5721.797 - 5747.003: 0.7553% ( 15) 00:08:17.841 5747.003 - 5772.209: 0.8994% ( 25) 00:08:17.841 5772.209 - 5797.415: 1.0436% ( 25) 00:08:17.841 5797.415 - 5822.622: 1.2569% ( 37) 00:08:17.841 5822.622 - 5847.828: 1.4875% ( 40) 00:08:17.841 5847.828 - 5873.034: 1.7989% ( 54) 00:08:17.841 5873.034 - 5898.240: 2.2140% ( 72) 00:08:17.841 5898.240 - 5923.446: 2.8944% ( 118) 00:08:17.841 5923.446 - 5948.652: 3.7304% ( 145) 00:08:17.841 5948.652 - 5973.858: 4.7682% ( 180) 00:08:17.841 5973.858 - 5999.065: 5.9848% ( 211) 00:08:17.841 5999.065 - 6024.271: 7.2417% ( 218) 00:08:17.841 6024.271 - 6049.477: 8.7638% ( 264) 00:08:17.841 6049.477 - 6074.683: 
10.4301% ( 289) 00:08:17.841 6074.683 - 6099.889: 12.0791% ( 286) 00:08:17.841 6099.889 - 6125.095: 13.8607% ( 309) 00:08:17.841 6125.095 - 6150.302: 15.7576% ( 329) 00:08:17.841 6150.302 - 6175.508: 17.5565% ( 312) 00:08:17.841 6175.508 - 6200.714: 19.5745% ( 350) 00:08:17.841 6200.714 - 6225.920: 21.5694% ( 346) 00:08:17.841 6225.920 - 6251.126: 23.5240% ( 339) 00:08:17.841 6251.126 - 6276.332: 25.5016% ( 343) 00:08:17.841 6276.332 - 6301.538: 27.4043% ( 330) 00:08:17.841 6301.538 - 6326.745: 29.4857% ( 361) 00:08:17.841 6326.745 - 6351.951: 31.5037% ( 350) 00:08:17.841 6351.951 - 6377.157: 33.5217% ( 350) 00:08:17.841 6377.157 - 6402.363: 35.5570% ( 353) 00:08:17.841 6402.363 - 6427.569: 37.5807% ( 351) 00:08:17.841 6427.569 - 6452.775: 39.5583% ( 343) 00:08:17.841 6452.775 - 6503.188: 43.6693% ( 713) 00:08:17.841 6503.188 - 6553.600: 47.8090% ( 718) 00:08:17.841 6553.600 - 6604.012: 51.9776% ( 723) 00:08:17.841 6604.012 - 6654.425: 56.0713% ( 710) 00:08:17.841 6654.425 - 6704.837: 59.9804% ( 678) 00:08:17.841 6704.837 - 6755.249: 63.6012% ( 628) 00:08:17.841 6755.249 - 6805.662: 66.6398% ( 527) 00:08:17.841 6805.662 - 6856.074: 68.9576% ( 402) 00:08:17.841 6856.074 - 6906.486: 70.6411% ( 292) 00:08:17.841 6906.486 - 6956.898: 71.9211% ( 222) 00:08:17.841 6956.898 - 7007.311: 72.9071% ( 171) 00:08:17.841 7007.311 - 7057.723: 73.7143% ( 140) 00:08:17.841 7057.723 - 7108.135: 74.4292% ( 124) 00:08:17.841 7108.135 - 7158.548: 75.0634% ( 110) 00:08:17.841 7158.548 - 7208.960: 75.7207% ( 114) 00:08:17.841 7208.960 - 7259.372: 76.3030% ( 101) 00:08:17.841 7259.372 - 7309.785: 76.9084% ( 105) 00:08:17.841 7309.785 - 7360.197: 77.4446% ( 93) 00:08:17.841 7360.197 - 7410.609: 77.9578% ( 89) 00:08:17.841 7410.609 - 7461.022: 78.4479% ( 85) 00:08:17.841 7461.022 - 7511.434: 78.8572% ( 71) 00:08:17.841 7511.434 - 7561.846: 79.2551% ( 69) 00:08:17.841 7561.846 - 7612.258: 79.6702% ( 72) 00:08:17.841 7612.258 - 7662.671: 80.0334% ( 63) 00:08:17.841 7662.671 - 7713.083: 80.3390% ( 53) 00:08:17.841 7713.083 - 7763.495: 80.6561% ( 55) 00:08:17.841 7763.495 - 7813.908: 80.9675% ( 54) 00:08:17.841 7813.908 - 7864.320: 81.2673% ( 52) 00:08:17.841 7864.320 - 7914.732: 81.5844% ( 55) 00:08:17.841 7914.732 - 7965.145: 81.8496% ( 46) 00:08:17.842 7965.145 - 8015.557: 82.1264% ( 48) 00:08:17.842 8015.557 - 8065.969: 82.3916% ( 46) 00:08:17.842 8065.969 - 8116.382: 82.6395% ( 43) 00:08:17.842 8116.382 - 8166.794: 82.8471% ( 36) 00:08:17.842 8166.794 - 8217.206: 83.0431% ( 34) 00:08:17.842 8217.206 - 8267.618: 83.1815% ( 24) 00:08:17.842 8267.618 - 8318.031: 83.3314% ( 26) 00:08:17.842 8318.031 - 8368.443: 83.4698% ( 24) 00:08:17.842 8368.443 - 8418.855: 83.5966% ( 22) 00:08:17.842 8418.855 - 8469.268: 83.7811% ( 32) 00:08:17.842 8469.268 - 8519.680: 83.9714% ( 33) 00:08:17.842 8519.680 - 8570.092: 84.2251% ( 44) 00:08:17.842 8570.092 - 8620.505: 84.5018% ( 48) 00:08:17.842 8620.505 - 8670.917: 84.7959% ( 51) 00:08:17.842 8670.917 - 8721.329: 85.1015% ( 53) 00:08:17.842 8721.329 - 8771.742: 85.4474% ( 60) 00:08:17.842 8771.742 - 8822.154: 85.8164% ( 64) 00:08:17.842 8822.154 - 8872.566: 86.1220% ( 53) 00:08:17.842 8872.566 - 8922.978: 86.4161% ( 51) 00:08:17.842 8922.978 - 8973.391: 86.7505% ( 58) 00:08:17.842 8973.391 - 9023.803: 87.1079% ( 62) 00:08:17.842 9023.803 - 9074.215: 87.4654% ( 62) 00:08:17.842 9074.215 - 9124.628: 87.8286% ( 63) 00:08:17.842 9124.628 - 9175.040: 88.2092% ( 66) 00:08:17.842 9175.040 - 9225.452: 88.6070% ( 69) 00:08:17.842 9225.452 - 9275.865: 88.9702% ( 63) 00:08:17.842 9275.865 - 
9326.277: 89.3277% ( 62) 00:08:17.842 9326.277 - 9376.689: 89.6506% ( 56) 00:08:17.842 9376.689 - 9427.102: 89.9735% ( 56) 00:08:17.842 9427.102 - 9477.514: 90.2387% ( 46) 00:08:17.842 9477.514 - 9527.926: 90.4924% ( 44) 00:08:17.842 9527.926 - 9578.338: 90.7576% ( 46) 00:08:17.842 9578.338 - 9628.751: 91.0113% ( 44) 00:08:17.842 9628.751 - 9679.163: 91.2477% ( 41) 00:08:17.842 9679.163 - 9729.575: 91.5071% ( 45) 00:08:17.842 9729.575 - 9779.988: 91.7262% ( 38) 00:08:17.842 9779.988 - 9830.400: 91.9165% ( 33) 00:08:17.842 9830.400 - 9880.812: 92.0837% ( 29) 00:08:17.842 9880.812 - 9931.225: 92.2221% ( 24) 00:08:17.842 9931.225 - 9981.637: 92.3432% ( 21) 00:08:17.842 9981.637 - 10032.049: 92.4585% ( 20) 00:08:17.842 10032.049 - 10082.462: 92.5565% ( 17) 00:08:17.842 10082.462 - 10132.874: 92.6603% ( 18) 00:08:17.842 10132.874 - 10183.286: 92.8102% ( 26) 00:08:17.842 10183.286 - 10233.698: 92.9486% ( 24) 00:08:17.842 10233.698 - 10284.111: 93.0927% ( 25) 00:08:17.842 10284.111 - 10334.523: 93.2369% ( 25) 00:08:17.842 10334.523 - 10384.935: 93.3983% ( 28) 00:08:17.842 10384.935 - 10435.348: 93.5136% ( 20) 00:08:17.842 10435.348 - 10485.760: 93.6232% ( 19) 00:08:17.842 10485.760 - 10536.172: 93.7154% ( 16) 00:08:17.842 10536.172 - 10586.585: 93.8019% ( 15) 00:08:17.842 10586.585 - 10636.997: 93.8595% ( 10) 00:08:17.842 10636.997 - 10687.409: 93.9345% ( 13) 00:08:17.842 10687.409 - 10737.822: 93.9922% ( 10) 00:08:17.842 10737.822 - 10788.234: 94.0786% ( 15) 00:08:17.842 10788.234 - 10838.646: 94.1536% ( 13) 00:08:17.842 10838.646 - 10889.058: 94.2574% ( 18) 00:08:17.842 10889.058 - 10939.471: 94.3900% ( 23) 00:08:17.842 10939.471 - 10989.883: 94.4880% ( 17) 00:08:17.842 10989.883 - 11040.295: 94.5687% ( 14) 00:08:17.842 11040.295 - 11090.708: 94.6494% ( 14) 00:08:17.842 11090.708 - 11141.120: 94.7244% ( 13) 00:08:17.842 11141.120 - 11191.532: 94.8051% ( 14) 00:08:17.842 11191.532 - 11241.945: 94.8858% ( 14) 00:08:17.842 11241.945 - 11292.357: 94.9839% ( 17) 00:08:17.842 11292.357 - 11342.769: 95.0934% ( 19) 00:08:17.842 11342.769 - 11393.182: 95.2145% ( 21) 00:08:17.842 11393.182 - 11443.594: 95.3413% ( 22) 00:08:17.842 11443.594 - 11494.006: 95.4509% ( 19) 00:08:17.842 11494.006 - 11544.418: 95.5720% ( 21) 00:08:17.842 11544.418 - 11594.831: 95.6988% ( 22) 00:08:17.842 11594.831 - 11645.243: 95.8083% ( 19) 00:08:17.842 11645.243 - 11695.655: 95.9179% ( 19) 00:08:17.842 11695.655 - 11746.068: 96.0159% ( 17) 00:08:17.842 11746.068 - 11796.480: 96.1024% ( 15) 00:08:17.842 11796.480 - 11846.892: 96.1774% ( 13) 00:08:17.842 11846.892 - 11897.305: 96.2696% ( 16) 00:08:17.842 11897.305 - 11947.717: 96.3446% ( 13) 00:08:17.842 11947.717 - 11998.129: 96.4310% ( 15) 00:08:17.842 11998.129 - 12048.542: 96.5291% ( 17) 00:08:17.842 12048.542 - 12098.954: 96.6098% ( 14) 00:08:17.842 12098.954 - 12149.366: 96.6905% ( 14) 00:08:17.842 12149.366 - 12199.778: 96.7712% ( 14) 00:08:17.842 12199.778 - 12250.191: 96.8519% ( 14) 00:08:17.842 12250.191 - 12300.603: 96.9327% ( 14) 00:08:17.842 12300.603 - 12351.015: 96.9788% ( 8) 00:08:17.842 12351.015 - 12401.428: 97.0249% ( 8) 00:08:17.842 12401.428 - 12451.840: 97.0595% ( 6) 00:08:17.842 12451.840 - 12502.252: 97.0999% ( 7) 00:08:17.842 12502.252 - 12552.665: 97.1345% ( 6) 00:08:17.842 12552.665 - 12603.077: 97.1748% ( 7) 00:08:17.842 12603.077 - 12653.489: 97.2152% ( 7) 00:08:17.842 12653.489 - 12703.902: 97.2440% ( 5) 00:08:17.842 12703.902 - 12754.314: 97.2786% ( 6) 00:08:17.842 12754.314 - 12804.726: 97.3190% ( 7) 00:08:17.842 12804.726 - 12855.138: 97.3593% ( 
7) 00:08:17.842 12855.138 - 12905.551: 97.3881% ( 5) 00:08:17.842 12905.551 - 13006.375: 97.4170% ( 5) 00:08:17.842 14115.446 - 14216.271: 97.4573% ( 7) 00:08:17.842 14216.271 - 14317.095: 97.5092% ( 9) 00:08:17.842 14317.095 - 14417.920: 97.5784% ( 12) 00:08:17.842 14417.920 - 14518.745: 97.6303% ( 9) 00:08:17.842 14518.745 - 14619.569: 97.6880% ( 10) 00:08:17.842 14619.569 - 14720.394: 97.7456% ( 10) 00:08:17.842 14720.394 - 14821.218: 97.7975% ( 9) 00:08:17.842 14821.218 - 14922.043: 97.8436% ( 8) 00:08:17.842 14922.043 - 15022.868: 97.8955% ( 9) 00:08:17.842 15022.868 - 15123.692: 97.9474% ( 9) 00:08:17.842 15123.692 - 15224.517: 98.0051% ( 10) 00:08:17.842 15224.517 - 15325.342: 98.0397% ( 6) 00:08:17.842 15325.342 - 15426.166: 98.0627% ( 4) 00:08:17.842 15426.166 - 15526.991: 98.0800% ( 3) 00:08:17.842 15526.991 - 15627.815: 98.1031% ( 4) 00:08:17.842 15627.815 - 15728.640: 98.1146% ( 2) 00:08:17.842 15728.640 - 15829.465: 98.1377% ( 4) 00:08:17.842 15829.465 - 15930.289: 98.1550% ( 3) 00:08:17.842 16636.062 - 16736.886: 98.1780% ( 4) 00:08:17.842 16736.886 - 16837.711: 98.2126% ( 6) 00:08:17.842 16837.711 - 16938.535: 98.2415% ( 5) 00:08:17.842 16938.535 - 17039.360: 98.2761% ( 6) 00:08:17.842 17039.360 - 17140.185: 98.2934% ( 3) 00:08:17.843 17140.185 - 17241.009: 98.3222% ( 5) 00:08:17.843 17241.009 - 17341.834: 98.3741% ( 9) 00:08:17.843 17341.834 - 17442.658: 98.4317% ( 10) 00:08:17.843 17442.658 - 17543.483: 98.4721% ( 7) 00:08:17.843 17543.483 - 17644.308: 98.5298% ( 10) 00:08:17.843 17644.308 - 17745.132: 98.5759% ( 8) 00:08:17.843 17745.132 - 17845.957: 98.6220% ( 8) 00:08:17.843 17845.957 - 17946.782: 98.6739% ( 9) 00:08:17.843 17946.782 - 18047.606: 98.6970% ( 4) 00:08:17.843 18047.606 - 18148.431: 98.7488% ( 9) 00:08:17.843 18148.431 - 18249.255: 98.8065% ( 10) 00:08:17.843 18249.255 - 18350.080: 98.8815% ( 13) 00:08:17.843 18350.080 - 18450.905: 98.9449% ( 11) 00:08:17.843 18450.905 - 18551.729: 99.0025% ( 10) 00:08:17.843 18551.729 - 18652.554: 99.0602% ( 10) 00:08:17.843 18652.554 - 18753.378: 99.1179% ( 10) 00:08:17.843 18753.378 - 18854.203: 99.1755% ( 10) 00:08:17.843 18854.203 - 18955.028: 99.2159% ( 7) 00:08:17.843 18955.028 - 19055.852: 99.2447% ( 5) 00:08:17.843 19055.852 - 19156.677: 99.2620% ( 3) 00:08:17.843 24500.382 - 24601.206: 99.2793% ( 3) 00:08:17.843 24601.206 - 24702.031: 99.3081% ( 5) 00:08:17.843 24702.031 - 24802.855: 99.3369% ( 5) 00:08:17.843 24802.855 - 24903.680: 99.3715% ( 6) 00:08:17.843 24903.680 - 25004.505: 99.4004% ( 5) 00:08:17.843 25004.505 - 25105.329: 99.4350% ( 6) 00:08:17.843 25105.329 - 25206.154: 99.4638% ( 5) 00:08:17.843 25206.154 - 25306.978: 99.4926% ( 5) 00:08:17.843 25306.978 - 25407.803: 99.5272% ( 6) 00:08:17.843 25407.803 - 25508.628: 99.5560% ( 5) 00:08:17.843 25508.628 - 25609.452: 99.5849% ( 5) 00:08:17.843 25609.452 - 25710.277: 99.6137% ( 5) 00:08:17.843 25710.277 - 25811.102: 99.6310% ( 3) 00:08:17.843 35288.615 - 35490.265: 99.6887% ( 10) 00:08:17.843 35490.265 - 35691.914: 99.7521% ( 11) 00:08:17.843 35691.914 - 35893.563: 99.8097% ( 10) 00:08:17.843 35893.563 - 36095.212: 99.8732% ( 11) 00:08:17.843 36095.212 - 36296.862: 99.9250% ( 9) 00:08:17.843 36296.862 - 36498.511: 99.9885% ( 11) 00:08:17.843 36498.511 - 36700.160: 100.0000% ( 2) 00:08:17.843 00:08:17.843 Latency histogram for PCIE (0000:00:12.0) NSID 2 from core 0: 00:08:17.843 ============================================================================== 00:08:17.843 Range in us Cumulative IO count 00:08:17.843 3402.831 - 3428.037: 0.0231% ( 4) 
00:08:17.843 3428.037 - 3453.243: 0.0519% ( 5) 00:08:17.843 3453.243 - 3478.449: 0.0577% ( 1) 00:08:17.843 3478.449 - 3503.655: 0.0692% ( 2) 00:08:17.843 3503.655 - 3528.862: 0.0807% ( 2) 00:08:17.843 3528.862 - 3554.068: 0.0923% ( 2) 00:08:17.843 3554.068 - 3579.274: 0.1095% ( 3) 00:08:17.843 3579.274 - 3604.480: 0.1211% ( 2) 00:08:17.843 3604.480 - 3629.686: 0.1384% ( 3) 00:08:17.843 3629.686 - 3654.892: 0.1499% ( 2) 00:08:17.843 3654.892 - 3680.098: 0.1614% ( 2) 00:08:17.843 3680.098 - 3705.305: 0.1730% ( 2) 00:08:17.843 3705.305 - 3730.511: 0.1903% ( 3) 00:08:17.843 3730.511 - 3755.717: 0.2018% ( 2) 00:08:17.843 3755.717 - 3780.923: 0.2133% ( 2) 00:08:17.843 3780.923 - 3806.129: 0.2249% ( 2) 00:08:17.843 3806.129 - 3831.335: 0.2422% ( 3) 00:08:17.843 3831.335 - 3856.542: 0.2537% ( 2) 00:08:17.843 3856.542 - 3881.748: 0.2710% ( 3) 00:08:17.843 3881.748 - 3906.954: 0.2825% ( 2) 00:08:17.843 3906.954 - 3932.160: 0.2940% ( 2) 00:08:17.843 3932.160 - 3957.366: 0.3113% ( 3) 00:08:17.843 3957.366 - 3982.572: 0.3229% ( 2) 00:08:17.843 3982.572 - 4007.778: 0.3402% ( 3) 00:08:17.843 4007.778 - 4032.985: 0.3517% ( 2) 00:08:17.843 4032.985 - 4058.191: 0.3632% ( 2) 00:08:17.843 4058.191 - 4083.397: 0.3690% ( 1) 00:08:17.843 4990.818 - 5016.025: 0.3978% ( 5) 00:08:17.843 5016.025 - 5041.231: 0.4151% ( 3) 00:08:17.843 5041.231 - 5066.437: 0.4267% ( 2) 00:08:17.843 5066.437 - 5091.643: 0.4382% ( 2) 00:08:17.843 5091.643 - 5116.849: 0.4440% ( 1) 00:08:17.843 5116.849 - 5142.055: 0.4613% ( 3) 00:08:17.843 5142.055 - 5167.262: 0.4728% ( 2) 00:08:17.843 5167.262 - 5192.468: 0.4901% ( 3) 00:08:17.843 5192.468 - 5217.674: 0.5016% ( 2) 00:08:17.843 5217.674 - 5242.880: 0.5189% ( 3) 00:08:17.843 5242.880 - 5268.086: 0.5304% ( 2) 00:08:17.843 5268.086 - 5293.292: 0.5420% ( 2) 00:08:17.843 5293.292 - 5318.498: 0.5593% ( 3) 00:08:17.843 5318.498 - 5343.705: 0.5708% ( 2) 00:08:17.843 5343.705 - 5368.911: 0.5881% ( 3) 00:08:17.843 5368.911 - 5394.117: 0.5996% ( 2) 00:08:17.843 5394.117 - 5419.323: 0.6169% ( 3) 00:08:17.843 5419.323 - 5444.529: 0.6285% ( 2) 00:08:17.843 5444.529 - 5469.735: 0.6400% ( 2) 00:08:17.843 5469.735 - 5494.942: 0.6515% ( 2) 00:08:17.843 5494.942 - 5520.148: 0.6631% ( 2) 00:08:17.843 5520.148 - 5545.354: 0.6804% ( 3) 00:08:17.843 5545.354 - 5570.560: 0.6919% ( 2) 00:08:17.843 5570.560 - 5595.766: 0.7092% ( 3) 00:08:17.843 5595.766 - 5620.972: 0.7207% ( 2) 00:08:17.843 5620.972 - 5646.178: 0.7322% ( 2) 00:08:17.843 5646.178 - 5671.385: 0.7380% ( 1) 00:08:17.843 5671.385 - 5696.591: 0.7841% ( 8) 00:08:17.843 5696.591 - 5721.797: 0.8130% ( 5) 00:08:17.843 5721.797 - 5747.003: 0.8533% ( 7) 00:08:17.843 5747.003 - 5772.209: 0.9225% ( 12) 00:08:17.843 5772.209 - 5797.415: 1.0148% ( 16) 00:08:17.843 5797.415 - 5822.622: 1.1416% ( 22) 00:08:17.843 5822.622 - 5847.828: 1.3203% ( 31) 00:08:17.843 5847.828 - 5873.034: 1.5625% ( 42) 00:08:17.843 5873.034 - 5898.240: 1.8393% ( 48) 00:08:17.843 5898.240 - 5923.446: 2.3293% ( 85) 00:08:17.843 5923.446 - 5948.652: 3.0443% ( 124) 00:08:17.843 5948.652 - 5973.858: 4.0302% ( 171) 00:08:17.843 5973.858 - 5999.065: 5.2756% ( 216) 00:08:17.843 5999.065 - 6024.271: 6.5383% ( 219) 00:08:17.843 6024.271 - 6049.477: 8.1296% ( 276) 00:08:17.843 6049.477 - 6074.683: 9.8305% ( 295) 00:08:17.843 6074.683 - 6099.889: 11.7216% ( 328) 00:08:17.843 6099.889 - 6125.095: 13.4744% ( 304) 00:08:17.843 6125.095 - 6150.302: 15.3310% ( 322) 00:08:17.843 6150.302 - 6175.508: 17.2509% ( 333) 00:08:17.844 6175.508 - 6200.714: 19.1651% ( 332) 00:08:17.844 6200.714 - 6225.920: 21.0909% 
( 334) 00:08:17.844 6225.920 - 6251.126: 23.0743% ( 344) 00:08:17.844 6251.126 - 6276.332: 25.0980% ( 351) 00:08:17.844 6276.332 - 6301.538: 27.1102% ( 349) 00:08:17.844 6301.538 - 6326.745: 29.1282% ( 350) 00:08:17.844 6326.745 - 6351.951: 31.2385% ( 366) 00:08:17.844 6351.951 - 6377.157: 33.2795% ( 354) 00:08:17.844 6377.157 - 6402.363: 35.3379% ( 357) 00:08:17.844 6402.363 - 6427.569: 37.4366% ( 364) 00:08:17.844 6427.569 - 6452.775: 39.5007% ( 358) 00:08:17.844 6452.775 - 6503.188: 43.7846% ( 743) 00:08:17.844 6503.188 - 6553.600: 47.9705% ( 726) 00:08:17.844 6553.600 - 6604.012: 52.1910% ( 732) 00:08:17.844 6604.012 - 6654.425: 56.3768% ( 726) 00:08:17.844 6654.425 - 6704.837: 60.5166% ( 718) 00:08:17.844 6704.837 - 6755.249: 64.3335% ( 662) 00:08:17.844 6755.249 - 6805.662: 67.4470% ( 540) 00:08:17.844 6805.662 - 6856.074: 69.6379% ( 380) 00:08:17.844 6856.074 - 6906.486: 71.1889% ( 269) 00:08:17.844 6906.486 - 6956.898: 72.3305% ( 198) 00:08:17.844 6956.898 - 7007.311: 73.2818% ( 165) 00:08:17.844 7007.311 - 7057.723: 74.1409% ( 149) 00:08:17.844 7057.723 - 7108.135: 74.8674% ( 126) 00:08:17.844 7108.135 - 7158.548: 75.4728% ( 105) 00:08:17.844 7158.548 - 7208.960: 76.1012% ( 109) 00:08:17.844 7208.960 - 7259.372: 76.6836% ( 101) 00:08:17.844 7259.372 - 7309.785: 77.2198% ( 93) 00:08:17.844 7309.785 - 7360.197: 77.7156% ( 86) 00:08:17.844 7360.197 - 7410.609: 78.1077% ( 68) 00:08:17.844 7410.609 - 7461.022: 78.5055% ( 69) 00:08:17.844 7461.022 - 7511.434: 78.9380% ( 75) 00:08:17.844 7511.434 - 7561.846: 79.3070% ( 64) 00:08:17.844 7561.846 - 7612.258: 79.6933% ( 67) 00:08:17.844 7612.258 - 7662.671: 80.1315% ( 76) 00:08:17.844 7662.671 - 7713.083: 80.5120% ( 66) 00:08:17.844 7713.083 - 7763.495: 80.8522% ( 59) 00:08:17.844 7763.495 - 7813.908: 81.1750% ( 56) 00:08:17.844 7813.908 - 7864.320: 81.4864% ( 54) 00:08:17.844 7864.320 - 7914.732: 81.7977% ( 54) 00:08:17.844 7914.732 - 7965.145: 82.1091% ( 54) 00:08:17.844 7965.145 - 8015.557: 82.3974% ( 50) 00:08:17.844 8015.557 - 8065.969: 82.7202% ( 56) 00:08:17.844 8065.969 - 8116.382: 83.0374% ( 55) 00:08:17.844 8116.382 - 8166.794: 83.3199% ( 49) 00:08:17.844 8166.794 - 8217.206: 83.6428% ( 56) 00:08:17.844 8217.206 - 8267.618: 83.9137% ( 47) 00:08:17.844 8267.618 - 8318.031: 84.1674% ( 44) 00:08:17.844 8318.031 - 8368.443: 84.3635% ( 34) 00:08:17.844 8368.443 - 8418.855: 84.5999% ( 41) 00:08:17.844 8418.855 - 8469.268: 84.8536% ( 44) 00:08:17.844 8469.268 - 8519.680: 85.1591% ( 53) 00:08:17.844 8519.680 - 8570.092: 85.4589% ( 52) 00:08:17.844 8570.092 - 8620.505: 85.6953% ( 41) 00:08:17.844 8620.505 - 8670.917: 85.9548% ( 45) 00:08:17.844 8670.917 - 8721.329: 86.1854% ( 40) 00:08:17.844 8721.329 - 8771.742: 86.4391% ( 44) 00:08:17.844 8771.742 - 8822.154: 86.6870% ( 43) 00:08:17.844 8822.154 - 8872.566: 86.9407% ( 44) 00:08:17.844 8872.566 - 8922.978: 87.1771% ( 41) 00:08:17.844 8922.978 - 8973.391: 87.3962% ( 38) 00:08:17.844 8973.391 - 9023.803: 87.5750% ( 31) 00:08:17.844 9023.803 - 9074.215: 87.7537% ( 31) 00:08:17.844 9074.215 - 9124.628: 87.9440% ( 33) 00:08:17.844 9124.628 - 9175.040: 88.1342% ( 33) 00:08:17.844 9175.040 - 9225.452: 88.3360% ( 35) 00:08:17.844 9225.452 - 9275.865: 88.5436% ( 36) 00:08:17.844 9275.865 - 9326.277: 88.7512% ( 36) 00:08:17.844 9326.277 - 9376.689: 88.9587% ( 36) 00:08:17.844 9376.689 - 9427.102: 89.1663% ( 36) 00:08:17.844 9427.102 - 9477.514: 89.3565% ( 33) 00:08:17.844 9477.514 - 9527.926: 89.5756% ( 38) 00:08:17.844 9527.926 - 9578.338: 89.8063% ( 40) 00:08:17.844 9578.338 - 9628.751: 90.0484% 
( 42) 00:08:17.844 9628.751 - 9679.163: 90.3137% ( 46) 00:08:17.844 9679.163 - 9729.575: 90.5616% ( 43) 00:08:17.844 9729.575 - 9779.988: 90.7864% ( 39) 00:08:17.844 9779.988 - 9830.400: 91.0344% ( 43) 00:08:17.844 9830.400 - 9880.812: 91.2650% ( 40) 00:08:17.844 9880.812 - 9931.225: 91.4899% ( 39) 00:08:17.844 9931.225 - 9981.637: 91.7205% ( 40) 00:08:17.844 9981.637 - 10032.049: 91.9165% ( 34) 00:08:17.844 10032.049 - 10082.462: 92.1414% ( 39) 00:08:17.844 10082.462 - 10132.874: 92.3605% ( 38) 00:08:17.844 10132.874 - 10183.286: 92.5623% ( 35) 00:08:17.844 10183.286 - 10233.698: 92.7410% ( 31) 00:08:17.844 10233.698 - 10284.111: 92.9082% ( 29) 00:08:17.844 10284.111 - 10334.523: 93.0869% ( 31) 00:08:17.844 10334.523 - 10384.935: 93.2599% ( 30) 00:08:17.844 10384.935 - 10435.348: 93.3752% ( 20) 00:08:17.844 10435.348 - 10485.760: 93.4502% ( 13) 00:08:17.844 10485.760 - 10536.172: 93.5251% ( 13) 00:08:17.844 10536.172 - 10586.585: 93.6116% ( 15) 00:08:17.844 10586.585 - 10636.997: 93.6981% ( 15) 00:08:17.844 10636.997 - 10687.409: 93.7961% ( 17) 00:08:17.844 10687.409 - 10737.822: 93.9230% ( 22) 00:08:17.844 10737.822 - 10788.234: 94.0095% ( 15) 00:08:17.844 10788.234 - 10838.646: 94.1017% ( 16) 00:08:17.844 10838.646 - 10889.058: 94.2055% ( 18) 00:08:17.844 10889.058 - 10939.471: 94.3150% ( 19) 00:08:17.844 10939.471 - 10989.883: 94.4419% ( 22) 00:08:17.844 10989.883 - 11040.295: 94.5284% ( 15) 00:08:17.844 11040.295 - 11090.708: 94.6552% ( 22) 00:08:17.844 11090.708 - 11141.120: 94.7936% ( 24) 00:08:17.844 11141.120 - 11191.532: 94.9377% ( 25) 00:08:17.844 11191.532 - 11241.945: 95.0876% ( 26) 00:08:17.844 11241.945 - 11292.357: 95.2433% ( 27) 00:08:17.844 11292.357 - 11342.769: 95.3817% ( 24) 00:08:17.844 11342.769 - 11393.182: 95.5143% ( 23) 00:08:17.844 11393.182 - 11443.594: 95.6469% ( 23) 00:08:17.844 11443.594 - 11494.006: 95.7738% ( 22) 00:08:17.844 11494.006 - 11544.418: 95.8948% ( 21) 00:08:17.844 11544.418 - 11594.831: 96.0390% ( 25) 00:08:17.844 11594.831 - 11645.243: 96.1831% ( 25) 00:08:17.844 11645.243 - 11695.655: 96.2754% ( 16) 00:08:17.844 11695.655 - 11746.068: 96.3792% ( 18) 00:08:17.844 11746.068 - 11796.480: 96.4887% ( 19) 00:08:17.845 11796.480 - 11846.892: 96.5694% ( 14) 00:08:17.845 11846.892 - 11897.305: 96.6559% ( 15) 00:08:17.845 11897.305 - 11947.717: 96.7366% ( 14) 00:08:17.845 11947.717 - 11998.129: 96.8000% ( 11) 00:08:17.845 11998.129 - 12048.542: 96.8808% ( 14) 00:08:17.845 12048.542 - 12098.954: 96.9615% ( 14) 00:08:17.845 12098.954 - 12149.366: 97.0307% ( 12) 00:08:17.845 12149.366 - 12199.778: 97.1114% ( 14) 00:08:17.845 12199.778 - 12250.191: 97.1748% ( 11) 00:08:17.845 12250.191 - 12300.603: 97.2555% ( 14) 00:08:17.845 12300.603 - 12351.015: 97.3132% ( 10) 00:08:17.845 12351.015 - 12401.428: 97.3478% ( 6) 00:08:17.845 12401.428 - 12451.840: 97.3708% ( 4) 00:08:17.845 12451.840 - 12502.252: 97.3824% ( 2) 00:08:17.845 12502.252 - 12552.665: 97.3939% ( 2) 00:08:17.845 12552.665 - 12603.077: 97.4054% ( 2) 00:08:17.845 12603.077 - 12653.489: 97.4112% ( 1) 00:08:17.845 12653.489 - 12703.902: 97.4170% ( 1) 00:08:17.845 13308.849 - 13409.674: 97.4285% ( 2) 00:08:17.845 13409.674 - 13510.498: 97.4804% ( 9) 00:08:17.845 13510.498 - 13611.323: 97.5381% ( 10) 00:08:17.845 13611.323 - 13712.148: 97.5669% ( 5) 00:08:17.845 13712.148 - 13812.972: 97.6015% ( 6) 00:08:17.845 13812.972 - 13913.797: 97.6303% ( 5) 00:08:17.845 13913.797 - 14014.622: 97.6591% ( 5) 00:08:17.845 14014.622 - 14115.446: 97.7110% ( 9) 00:08:17.845 14115.446 - 14216.271: 97.7514% ( 7) 
00:08:17.845 14216.271 - 14317.095: 97.7917% ( 7) 00:08:17.845 14317.095 - 14417.920: 97.8321% ( 7) 00:08:17.845 14417.920 - 14518.745: 97.8725% ( 7) 00:08:17.845 14518.745 - 14619.569: 97.9013% ( 5) 00:08:17.845 14619.569 - 14720.394: 97.9244% ( 4) 00:08:17.845 14720.394 - 14821.218: 97.9474% ( 4) 00:08:17.845 14821.218 - 14922.043: 97.9705% ( 4) 00:08:17.845 14922.043 - 15022.868: 97.9935% ( 4) 00:08:17.845 15022.868 - 15123.692: 98.0224% ( 5) 00:08:17.845 15123.692 - 15224.517: 98.0454% ( 4) 00:08:17.845 15224.517 - 15325.342: 98.0743% ( 5) 00:08:17.845 15325.342 - 15426.166: 98.0858% ( 2) 00:08:17.845 15426.166 - 15526.991: 98.1089% ( 4) 00:08:17.845 15526.991 - 15627.815: 98.1319% ( 4) 00:08:17.845 15627.815 - 15728.640: 98.1492% ( 3) 00:08:17.845 15728.640 - 15829.465: 98.1550% ( 1) 00:08:17.845 16434.412 - 16535.237: 98.1723% ( 3) 00:08:17.845 16535.237 - 16636.062: 98.1896% ( 3) 00:08:17.845 16636.062 - 16736.886: 98.2184% ( 5) 00:08:17.845 16736.886 - 16837.711: 98.2645% ( 8) 00:08:17.845 16837.711 - 16938.535: 98.3164% ( 9) 00:08:17.845 16938.535 - 17039.360: 98.3798% ( 11) 00:08:17.845 17039.360 - 17140.185: 98.4375% ( 10) 00:08:17.845 17140.185 - 17241.009: 98.4952% ( 10) 00:08:17.845 17241.009 - 17341.834: 98.5874% ( 16) 00:08:17.845 17341.834 - 17442.658: 98.6451% ( 10) 00:08:17.845 17442.658 - 17543.483: 98.7085% ( 11) 00:08:17.845 17543.483 - 17644.308: 98.7777% ( 12) 00:08:17.845 17644.308 - 17745.132: 98.8353% ( 10) 00:08:17.845 17745.132 - 17845.957: 98.8988% ( 11) 00:08:17.845 17845.957 - 17946.782: 98.9564% ( 10) 00:08:17.845 17946.782 - 18047.606: 99.0025% ( 8) 00:08:17.845 18047.606 - 18148.431: 99.0487% ( 8) 00:08:17.845 18148.431 - 18249.255: 99.0717% ( 4) 00:08:17.845 18249.255 - 18350.080: 99.0948% ( 4) 00:08:17.845 18350.080 - 18450.905: 99.1179% ( 4) 00:08:17.845 18450.905 - 18551.729: 99.1409% ( 4) 00:08:17.845 18551.729 - 18652.554: 99.1640% ( 4) 00:08:17.845 18652.554 - 18753.378: 99.1870% ( 4) 00:08:17.845 18753.378 - 18854.203: 99.2101% ( 4) 00:08:17.845 18854.203 - 18955.028: 99.2332% ( 4) 00:08:17.845 18955.028 - 19055.852: 99.2562% ( 4) 00:08:17.845 19055.852 - 19156.677: 99.2620% ( 1) 00:08:17.845 26214.400 - 26416.049: 99.2851% ( 4) 00:08:17.845 26416.049 - 26617.698: 99.3139% ( 5) 00:08:17.845 26617.698 - 26819.348: 99.3600% ( 8) 00:08:17.845 26819.348 - 27020.997: 99.4234% ( 11) 00:08:17.845 27020.997 - 27222.646: 99.4869% ( 11) 00:08:17.845 27222.646 - 27424.295: 99.5387% ( 9) 00:08:17.845 27424.295 - 27625.945: 99.6022% ( 11) 00:08:17.845 27625.945 - 27827.594: 99.6310% ( 5) 00:08:17.845 35893.563 - 36095.212: 99.6714% ( 7) 00:08:17.845 36095.212 - 36296.862: 99.7175% ( 8) 00:08:17.845 36296.862 - 36498.511: 99.7694% ( 9) 00:08:17.845 36498.511 - 36700.160: 99.8270% ( 10) 00:08:17.845 36700.160 - 36901.809: 99.8905% ( 11) 00:08:17.845 36901.809 - 37103.458: 99.9539% ( 11) 00:08:17.845 37103.458 - 37305.108: 100.0000% ( 8) 00:08:17.845 00:08:17.845 Latency histogram for PCIE (0000:00:12.0) NSID 3 from core 0: 00:08:17.845 ============================================================================== 00:08:17.845 Range in us Cumulative IO count 00:08:17.845 3213.785 - 3226.388: 0.0288% ( 5) 00:08:17.845 3226.388 - 3251.594: 0.0404% ( 2) 00:08:17.845 3251.594 - 3276.800: 0.0519% ( 2) 00:08:17.845 3276.800 - 3302.006: 0.0577% ( 1) 00:08:17.845 3302.006 - 3327.212: 0.0634% ( 1) 00:08:17.845 3327.212 - 3352.418: 0.0750% ( 2) 00:08:17.845 3352.418 - 3377.625: 0.0865% ( 2) 00:08:17.845 3377.625 - 3402.831: 0.1038% ( 3) 00:08:17.845 3402.831 - 3428.037: 
0.1153% ( 2) 00:08:17.845 3428.037 - 3453.243: 0.1268% ( 2) 00:08:17.845 3453.243 - 3478.449: 0.1384% ( 2) 00:08:17.845 3478.449 - 3503.655: 0.1499% ( 2) 00:08:17.845 3503.655 - 3528.862: 0.1614% ( 2) 00:08:17.845 3528.862 - 3554.068: 0.1730% ( 2) 00:08:17.845 3554.068 - 3579.274: 0.1845% ( 2) 00:08:17.845 3579.274 - 3604.480: 0.1960% ( 2) 00:08:17.845 3604.480 - 3629.686: 0.2076% ( 2) 00:08:17.845 3629.686 - 3654.892: 0.2191% ( 2) 00:08:17.845 3654.892 - 3680.098: 0.2364% ( 3) 00:08:17.845 3680.098 - 3705.305: 0.2479% ( 2) 00:08:17.845 3705.305 - 3730.511: 0.2595% ( 2) 00:08:17.845 3730.511 - 3755.717: 0.2710% ( 2) 00:08:17.845 3755.717 - 3780.923: 0.2883% ( 3) 00:08:17.845 3780.923 - 3806.129: 0.2998% ( 2) 00:08:17.845 3806.129 - 3831.335: 0.3113% ( 2) 00:08:17.845 3831.335 - 3856.542: 0.3286% ( 3) 00:08:17.845 3856.542 - 3881.748: 0.3402% ( 2) 00:08:17.845 3881.748 - 3906.954: 0.3517% ( 2) 00:08:17.846 3906.954 - 3932.160: 0.3632% ( 2) 00:08:17.846 3932.160 - 3957.366: 0.3690% ( 1) 00:08:17.846 4738.757 - 4763.963: 0.3805% ( 2) 00:08:17.846 4763.963 - 4789.169: 0.4094% ( 5) 00:08:17.846 4789.169 - 4814.375: 0.4209% ( 2) 00:08:17.846 4814.375 - 4839.582: 0.4267% ( 1) 00:08:17.846 4839.582 - 4864.788: 0.4382% ( 2) 00:08:17.846 4864.788 - 4889.994: 0.4497% ( 2) 00:08:17.846 4889.994 - 4915.200: 0.4670% ( 3) 00:08:17.846 4915.200 - 4940.406: 0.4843% ( 3) 00:08:17.846 4940.406 - 4965.612: 0.5074% ( 4) 00:08:17.846 4965.612 - 4990.818: 0.5189% ( 2) 00:08:17.846 5016.025 - 5041.231: 0.5362% ( 3) 00:08:17.846 5041.231 - 5066.437: 0.5477% ( 2) 00:08:17.846 5066.437 - 5091.643: 0.5535% ( 1) 00:08:17.846 5091.643 - 5116.849: 0.5708% ( 3) 00:08:17.846 5116.849 - 5142.055: 0.5823% ( 2) 00:08:17.846 5142.055 - 5167.262: 0.5939% ( 2) 00:08:17.846 5167.262 - 5192.468: 0.6112% ( 3) 00:08:17.846 5192.468 - 5217.674: 0.6227% ( 2) 00:08:17.846 5217.674 - 5242.880: 0.6400% ( 3) 00:08:17.846 5242.880 - 5268.086: 0.6515% ( 2) 00:08:17.846 5268.086 - 5293.292: 0.6631% ( 2) 00:08:17.846 5293.292 - 5318.498: 0.6804% ( 3) 00:08:17.846 5318.498 - 5343.705: 0.6919% ( 2) 00:08:17.846 5343.705 - 5368.911: 0.7092% ( 3) 00:08:17.846 5368.911 - 5394.117: 0.7207% ( 2) 00:08:17.846 5394.117 - 5419.323: 0.7380% ( 3) 00:08:17.846 5671.385 - 5696.591: 0.7668% ( 5) 00:08:17.846 5696.591 - 5721.797: 0.7957% ( 5) 00:08:17.846 5721.797 - 5747.003: 0.8245% ( 5) 00:08:17.846 5747.003 - 5772.209: 0.8937% ( 12) 00:08:17.846 5772.209 - 5797.415: 0.9917% ( 17) 00:08:17.846 5797.415 - 5822.622: 1.1185% ( 22) 00:08:17.846 5822.622 - 5847.828: 1.3030% ( 32) 00:08:17.846 5847.828 - 5873.034: 1.4991% ( 34) 00:08:17.846 5873.034 - 5898.240: 1.7931% ( 51) 00:08:17.846 5898.240 - 5923.446: 2.2774% ( 84) 00:08:17.846 5923.446 - 5948.652: 2.9982% ( 125) 00:08:17.846 5948.652 - 5973.858: 3.9668% ( 168) 00:08:17.846 5973.858 - 5999.065: 5.1257% ( 201) 00:08:17.846 5999.065 - 6024.271: 6.6248% ( 260) 00:08:17.846 6024.271 - 6049.477: 8.1642% ( 267) 00:08:17.846 6049.477 - 6074.683: 9.8881% ( 299) 00:08:17.846 6074.683 - 6099.889: 11.6813% ( 311) 00:08:17.846 6099.889 - 6125.095: 13.4167% ( 301) 00:08:17.846 6125.095 - 6150.302: 15.2214% ( 313) 00:08:17.846 6150.302 - 6175.508: 17.1414% ( 333) 00:08:17.846 6175.508 - 6200.714: 19.1536% ( 349) 00:08:17.846 6200.714 - 6225.920: 21.1543% ( 347) 00:08:17.846 6225.920 - 6251.126: 23.0685% ( 332) 00:08:17.846 6251.126 - 6276.332: 25.1095% ( 354) 00:08:17.846 6276.332 - 6301.538: 27.1102% ( 347) 00:08:17.846 6301.538 - 6326.745: 29.0936% ( 344) 00:08:17.846 6326.745 - 6351.951: 31.0943% ( 347) 
00:08:17.846 6351.951 - 6377.157: 33.1469% ( 356) 00:08:17.846 6377.157 - 6402.363: 35.1995% ( 356) 00:08:17.846 6402.363 - 6427.569: 37.3270% ( 369) 00:08:17.846 6427.569 - 6452.775: 39.4430% ( 367) 00:08:17.846 6452.775 - 6503.188: 43.6347% ( 727) 00:08:17.846 6503.188 - 6553.600: 47.8667% ( 734) 00:08:17.846 6553.600 - 6604.012: 52.0872% ( 732) 00:08:17.846 6604.012 - 6654.425: 56.3192% ( 734) 00:08:17.846 6654.425 - 6704.837: 60.4013% ( 708) 00:08:17.846 6704.837 - 6755.249: 64.1836% ( 656) 00:08:17.846 6755.249 - 6805.662: 67.4008% ( 558) 00:08:17.846 6805.662 - 6856.074: 69.7532% ( 408) 00:08:17.846 6856.074 - 6906.486: 71.2465% ( 259) 00:08:17.846 6906.486 - 6956.898: 72.3074% ( 184) 00:08:17.846 6956.898 - 7007.311: 73.2818% ( 169) 00:08:17.846 7007.311 - 7057.723: 74.1236% ( 146) 00:08:17.846 7057.723 - 7108.135: 74.7117% ( 102) 00:08:17.846 7108.135 - 7158.548: 75.3056% ( 103) 00:08:17.846 7158.548 - 7208.960: 75.9686% ( 115) 00:08:17.846 7208.960 - 7259.372: 76.5106% ( 94) 00:08:17.846 7259.372 - 7309.785: 76.9488% ( 76) 00:08:17.846 7309.785 - 7360.197: 77.4446% ( 86) 00:08:17.846 7360.197 - 7410.609: 77.8771% ( 75) 00:08:17.846 7410.609 - 7461.022: 78.3268% ( 78) 00:08:17.846 7461.022 - 7511.434: 78.7362% ( 71) 00:08:17.846 7511.434 - 7561.846: 79.1628% ( 74) 00:08:17.846 7561.846 - 7612.258: 79.6241% ( 80) 00:08:17.846 7612.258 - 7662.671: 80.0507% ( 74) 00:08:17.846 7662.671 - 7713.083: 80.4428% ( 68) 00:08:17.846 7713.083 - 7763.495: 80.8522% ( 71) 00:08:17.846 7763.495 - 7813.908: 81.2212% ( 64) 00:08:17.846 7813.908 - 7864.320: 81.6421% ( 73) 00:08:17.846 7864.320 - 7914.732: 82.0284% ( 67) 00:08:17.846 7914.732 - 7965.145: 82.4031% ( 65) 00:08:17.846 7965.145 - 8015.557: 82.7375% ( 58) 00:08:17.846 8015.557 - 8065.969: 83.0662% ( 57) 00:08:17.846 8065.969 - 8116.382: 83.4121% ( 60) 00:08:17.846 8116.382 - 8166.794: 83.6774% ( 46) 00:08:17.846 8166.794 - 8217.206: 83.9137% ( 41) 00:08:17.846 8217.206 - 8267.618: 84.1559% ( 42) 00:08:17.846 8267.618 - 8318.031: 84.3635% ( 36) 00:08:17.846 8318.031 - 8368.443: 84.5883% ( 39) 00:08:17.846 8368.443 - 8418.855: 84.8305% ( 42) 00:08:17.846 8418.855 - 8469.268: 85.0726% ( 42) 00:08:17.846 8469.268 - 8519.680: 85.3436% ( 47) 00:08:17.846 8519.680 - 8570.092: 85.5916% ( 43) 00:08:17.846 8570.092 - 8620.505: 85.8280% ( 41) 00:08:17.846 8620.505 - 8670.917: 86.0643% ( 41) 00:08:17.846 8670.917 - 8721.329: 86.3123% ( 43) 00:08:17.846 8721.329 - 8771.742: 86.5544% ( 42) 00:08:17.846 8771.742 - 8822.154: 86.7966% ( 42) 00:08:17.846 8822.154 - 8872.566: 87.0330% ( 41) 00:08:17.846 8872.566 - 8922.978: 87.2521% ( 38) 00:08:17.846 8922.978 - 8973.391: 87.4596% ( 36) 00:08:17.846 8973.391 - 9023.803: 87.6557% ( 34) 00:08:17.846 9023.803 - 9074.215: 87.8748% ( 38) 00:08:17.846 9074.215 - 9124.628: 88.0650% ( 33) 00:08:17.846 9124.628 - 9175.040: 88.2611% ( 34) 00:08:17.846 9175.040 - 9225.452: 88.4571% ( 34) 00:08:17.846 9225.452 - 9275.865: 88.6070% ( 26) 00:08:17.846 9275.865 - 9326.277: 88.7512% ( 25) 00:08:17.846 9326.277 - 9376.689: 88.9011% ( 26) 00:08:17.846 9376.689 - 9427.102: 89.0337% ( 23) 00:08:17.846 9427.102 - 9477.514: 89.1720% ( 24) 00:08:17.846 9477.514 - 9527.926: 89.3450% ( 30) 00:08:17.846 9527.926 - 9578.338: 89.5353% ( 33) 00:08:17.847 9578.338 - 9628.751: 89.7083% ( 30) 00:08:17.847 9628.751 - 9679.163: 89.8928% ( 32) 00:08:17.847 9679.163 - 9729.575: 90.0888% ( 34) 00:08:17.847 9729.575 - 9779.988: 90.2675% ( 31) 00:08:17.847 9779.988 - 9830.400: 90.4693% ( 35) 00:08:17.847 9830.400 - 9880.812: 90.6769% ( 36) 
00:08:17.847 9880.812 - 9931.225: 90.8902% ( 37) 00:08:17.847 9931.225 - 9981.637: 91.1208% ( 40) 00:08:17.847 9981.637 - 10032.049: 91.3457% ( 39) 00:08:17.847 10032.049 - 10082.462: 91.5648% ( 38) 00:08:17.847 10082.462 - 10132.874: 91.8185% ( 44) 00:08:17.847 10132.874 - 10183.286: 92.1068% ( 50) 00:08:17.847 10183.286 - 10233.698: 92.3720% ( 46) 00:08:17.847 10233.698 - 10284.111: 92.6430% ( 47) 00:08:17.847 10284.111 - 10334.523: 92.8794% ( 41) 00:08:17.847 10334.523 - 10384.935: 93.1331% ( 44) 00:08:17.847 10384.935 - 10435.348: 93.3464% ( 37) 00:08:17.847 10435.348 - 10485.760: 93.5482% ( 35) 00:08:17.847 10485.760 - 10536.172: 93.7558% ( 36) 00:08:17.847 10536.172 - 10586.585: 94.0095% ( 44) 00:08:17.847 10586.585 - 10636.997: 94.2286% ( 38) 00:08:17.847 10636.997 - 10687.409: 94.4419% ( 37) 00:08:17.847 10687.409 - 10737.822: 94.6264% ( 32) 00:08:17.847 10737.822 - 10788.234: 94.7994% ( 30) 00:08:17.847 10788.234 - 10838.646: 94.9608% ( 28) 00:08:17.847 10838.646 - 10889.058: 95.1222% ( 28) 00:08:17.847 10889.058 - 10939.471: 95.2433% ( 21) 00:08:17.847 10939.471 - 10989.883: 95.3529% ( 19) 00:08:17.847 10989.883 - 11040.295: 95.4739% ( 21) 00:08:17.847 11040.295 - 11090.708: 95.5777% ( 18) 00:08:17.847 11090.708 - 11141.120: 95.6642% ( 15) 00:08:17.847 11141.120 - 11191.532: 95.7276% ( 11) 00:08:17.847 11191.532 - 11241.945: 95.8083% ( 14) 00:08:17.847 11241.945 - 11292.357: 95.8718% ( 11) 00:08:17.847 11292.357 - 11342.769: 95.9237% ( 9) 00:08:17.847 11342.769 - 11393.182: 95.9640% ( 7) 00:08:17.847 11393.182 - 11443.594: 95.9929% ( 5) 00:08:17.847 11443.594 - 11494.006: 96.0505% ( 10) 00:08:17.847 11494.006 - 11544.418: 96.1197% ( 12) 00:08:17.847 11544.418 - 11594.831: 96.1831% ( 11) 00:08:17.847 11594.831 - 11645.243: 96.2465% ( 11) 00:08:17.847 11645.243 - 11695.655: 96.2984% ( 9) 00:08:17.847 11695.655 - 11746.068: 96.3619% ( 11) 00:08:17.847 11746.068 - 11796.480: 96.4137% ( 9) 00:08:17.847 11796.480 - 11846.892: 96.4772% ( 11) 00:08:17.847 11846.892 - 11897.305: 96.5521% ( 13) 00:08:17.847 11897.305 - 11947.717: 96.6386% ( 15) 00:08:17.847 11947.717 - 11998.129: 96.7078% ( 12) 00:08:17.847 11998.129 - 12048.542: 96.7770% ( 12) 00:08:17.847 12048.542 - 12098.954: 96.8462% ( 12) 00:08:17.847 12098.954 - 12149.366: 96.8865% ( 7) 00:08:17.847 12149.366 - 12199.778: 96.9384% ( 9) 00:08:17.847 12199.778 - 12250.191: 96.9961% ( 10) 00:08:17.847 12250.191 - 12300.603: 97.0653% ( 12) 00:08:17.847 12300.603 - 12351.015: 97.1056% ( 7) 00:08:17.847 12351.015 - 12401.428: 97.1518% ( 8) 00:08:17.847 12401.428 - 12451.840: 97.1921% ( 7) 00:08:17.847 12451.840 - 12502.252: 97.2382% ( 8) 00:08:17.847 12502.252 - 12552.665: 97.2844% ( 8) 00:08:17.847 12552.665 - 12603.077: 97.3247% ( 7) 00:08:17.847 12603.077 - 12653.489: 97.3708% ( 8) 00:08:17.847 12653.489 - 12703.902: 97.4112% ( 7) 00:08:17.847 12703.902 - 12754.314: 97.4170% ( 1) 00:08:17.847 13308.849 - 13409.674: 97.4285% ( 2) 00:08:17.847 13409.674 - 13510.498: 97.4631% ( 6) 00:08:17.847 13510.498 - 13611.323: 97.4919% ( 5) 00:08:17.847 13611.323 - 13712.148: 97.5265% ( 6) 00:08:17.847 13712.148 - 13812.972: 97.5554% ( 5) 00:08:17.847 13812.972 - 13913.797: 97.5842% ( 5) 00:08:17.847 13913.797 - 14014.622: 97.6534% ( 12) 00:08:17.847 14014.622 - 14115.446: 97.7110% ( 10) 00:08:17.847 14115.446 - 14216.271: 97.7629% ( 9) 00:08:17.847 14216.271 - 14317.095: 97.8263% ( 11) 00:08:17.847 14317.095 - 14417.920: 97.8725% ( 8) 00:08:17.847 14417.920 - 14518.745: 97.9186% ( 8) 00:08:17.847 14518.745 - 14619.569: 97.9474% ( 5) 00:08:17.847 
14619.569 - 14720.394: 97.9705% ( 4)
00:08:17.847 14720.394 - 14821.218: 97.9993% ( 5)
00:08:17.847 14821.218 - 14922.043: 98.0166% ( 3)
00:08:17.847 14922.043 - 15022.868: 98.0454% ( 5)
00:08:17.847 15022.868 - 15123.692: 98.0685% ( 4)
00:08:17.847 15123.692 - 15224.517: 98.0916% ( 4)
00:08:17.847 15224.517 - 15325.342: 98.1146% ( 4)
00:08:17.847 15325.342 - 15426.166: 98.1377% ( 4)
00:08:17.847 15426.166 - 15526.991: 98.1550% ( 3)
00:08:17.847 15829.465 - 15930.289: 98.1780% ( 4)
00:08:17.847 15930.289 - 16031.114: 98.2184% ( 7)
00:08:17.847 16031.114 - 16131.938: 98.2472% ( 5)
00:08:17.847 16131.938 - 16232.763: 98.2761% ( 5)
00:08:17.847 16232.763 - 16333.588: 98.3107% ( 6)
00:08:17.847 16333.588 - 16434.412: 98.3452% ( 6)
00:08:17.847 16434.412 - 16535.237: 98.3683% ( 4)
00:08:17.847 16535.237 - 16636.062: 98.3914% ( 4)
00:08:17.847 16636.062 - 16736.886: 98.4202% ( 5)
00:08:17.847 16736.886 - 16837.711: 98.4490% ( 5)
00:08:17.847 16837.711 - 16938.535: 98.4721% ( 4)
00:08:17.847 16938.535 - 17039.360: 98.5240% ( 9)
00:08:17.847 17039.360 - 17140.185: 98.5816% ( 10)
00:08:17.847 17140.185 - 17241.009: 98.6220% ( 7)
00:08:17.847 17241.009 - 17341.834: 98.6566% ( 6)
00:08:17.847 17341.834 - 17442.658: 98.6854% ( 5)
00:08:17.847 17442.658 - 17543.483: 98.7373% ( 9)
00:08:17.847 17543.483 - 17644.308: 98.7834% ( 8)
00:08:17.847 17644.308 - 17745.132: 98.8411% ( 10)
00:08:17.847 17745.132 - 17845.957: 98.8988% ( 10)
00:08:17.847 17845.957 - 17946.782: 98.9564% ( 10)
00:08:17.847 17946.782 - 18047.606: 99.0198% ( 11)
00:08:17.847 18047.606 - 18148.431: 99.0544% ( 6)
00:08:17.847 18148.431 - 18249.255: 99.0775% ( 4)
00:08:17.847 18249.255 - 18350.080: 99.1063% ( 5)
00:08:17.847 18350.080 - 18450.905: 99.1236% ( 3)
00:08:17.847 18450.905 - 18551.729: 99.1351% ( 2)
00:08:17.847 18551.729 - 18652.554: 99.1640% ( 5)
00:08:17.848 18652.554 - 18753.378: 99.1813% ( 3)
00:08:17.848 18753.378 - 18854.203: 99.2043% ( 4)
00:08:17.848 18854.203 - 18955.028: 99.2332% ( 5)
00:08:17.848 18955.028 - 19055.852: 99.2505% ( 3)
00:08:17.848 19055.852 - 19156.677: 99.2620% ( 2)
00:08:17.848 26214.400 - 26416.049: 99.2851% ( 4)
00:08:17.848 26416.049 - 26617.698: 99.3485% ( 11)
00:08:17.848 26617.698 - 26819.348: 99.4061% ( 10)
00:08:17.848 26819.348 - 27020.997: 99.4696% ( 11)
00:08:17.848 27020.997 - 27222.646: 99.5330% ( 11)
00:08:17.848 27222.646 - 27424.295: 99.5964% ( 11)
00:08:17.848 27424.295 - 27625.945: 99.6310% ( 6)
00:08:17.848 35490.265 - 35691.914: 99.6541% ( 4)
00:08:17.848 35691.914 - 35893.563: 99.7117% ( 10)
00:08:17.848 35893.563 - 36095.212: 99.7636% ( 9)
00:08:17.848 36095.212 - 36296.862: 99.8270% ( 11)
00:08:17.848 36296.862 - 36498.511: 99.8905% ( 11)
00:08:17.848 36498.511 - 36700.160: 99.9539% ( 11)
00:08:17.848 36700.160 - 36901.809: 100.0000% ( 8)
00:08:17.848
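Each "Latency histogram" block in this log pairs a bucket range in microseconds with the cumulative percentage of I/Os completed at or below that bucket, plus the count that landed in the bucket itself (in parentheses). A minimal sketch of how one block can be reduced back to percentiles, assuming only the line format visible here (`BUCKET` and `percentile_us` are illustrative names, not SPDK API):

```python
import re

# Matches bucket entries like "14720.394 - 14821.218: 97.9993% ( 5)".
BUCKET = re.compile(r"(\d+\.\d+) - (\d+\.\d+):\s+\d+\.\d+%\s+\(\s*(\d+)\)")

def percentile_us(histogram_text: str, p: float) -> float:
    """Return the upper bound (us) of the first bucket whose cumulative
    share of I/Os reaches p percent -- bucket resolution is all the
    summary tables report, hence the repeated values at the high tail."""
    buckets = [(float(hi), int(n)) for _lo, hi, n in BUCKET.findall(histogram_text)]
    total = sum(n for _, n in buckets)
    seen = 0
    for hi, n in buckets:
        seen += n
        if 100.0 * seen / total >= p:
            return hi
    return buckets[-1][0]
```

For example, the PCIE (0000:00:11.0) NSID 1 histogram further below crosses 50% inside the 6604.012 - 6654.425 bucket, which is exactly the "50.00000% : 6654.425us" line of its summary.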
00:08:17.848 06:42:10 nvme.nvme_perf -- nvme/nvme.sh@23 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w write -o 12288 -t 1 -LL -i 0
00:08:19.227 Initializing NVMe Controllers
00:08:19.227 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010]
00:08:19.227 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010]
00:08:19.227 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010]
00:08:19.227 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010]
00:08:19.227 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0
00:08:19.227 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0
00:08:19.227 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0
00:08:19.227 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0
00:08:19.227 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0
00:08:19.227 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0
00:08:19.227 Initialization complete. Launching workers.
00:08:19.227 ========================================================
00:08:19.227 Latency(us)
00:08:19.227 Device Information : IOPS MiB/s Average min max
00:08:19.227 PCIE (0000:00:11.0) NSID 1 from core 0: 18438.16 216.07 6943.84 5130.48 22125.38
00:08:19.227 PCIE (0000:00:13.0) NSID 1 from core 0: 18438.16 216.07 6938.34 4922.13 21552.74
00:08:19.227 PCIE (0000:00:10.0) NSID 1 from core 0: 18438.16 216.07 6931.75 4464.84 20396.97
00:08:19.227 PCIE (0000:00:12.0) NSID 1 from core 0: 18438.16 216.07 6925.33 4289.90 19882.23
00:08:19.227 PCIE (0000:00:12.0) NSID 2 from core 0: 18438.16 216.07 6919.17 3474.92 19625.57
00:08:19.227 PCIE (0000:00:12.0) NSID 3 from core 0: 18438.16 216.07 6913.12 3033.21 19225.81
00:08:19.227 ========================================================
00:08:19.227 Total : 110628.96 1296.43 6928.59 3033.21 22125.38
00:08:19.227
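The device table above reports all six namespaces at identical IOPS; two cross-checks tie its columns back to the invocation's -o 12288 (12 KiB I/O size) and -q 128 (queue depth) flags. Plain arithmetic with values copied from the first row, nothing SPDK-specific:

```python
# Row: PCIE (0000:00:11.0) NSID 1 -> 18438.16 IOPS, 216.07 MiB/s, 6943.84 us avg.
iops = 18438.16
avg_us = 6943.84
io_size = 12288                                # bytes per write (-o 12288)

# Throughput column: IOPS x bytes per I/O, expressed in MiB/s.
print(f"{iops * io_size / 2**20:.2f} MiB/s")   # -> 216.07, matches the table

# Little's law: mean in-flight I/Os = IOPS x mean latency. Each namespace
# sits near 128 outstanding I/Os, i.e. the -q 128 queue is kept full.
print(f"{iops * avg_us / 1e6:.1f} in flight")  # -> ~128.0
```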
00:08:19.227 Summary latency data for PCIE (0000:00:11.0) NSID 1 from core 0:
00:08:19.227 =================================================================================
00:08:19.227 1.00000% : 5822.622us
00:08:19.227 10.00000% : 6301.538us
00:08:19.227 25.00000% : 6452.775us
00:08:19.227 50.00000% : 6654.425us
00:08:19.227 75.00000% : 6906.486us
00:08:19.227 90.00000% : 7813.908us
00:08:19.227 95.00000% : 8822.154us
00:08:19.227 98.00000% : 10737.822us
00:08:19.227 99.00000% : 14216.271us
00:08:19.227 99.50000% : 16434.412us
00:08:19.227 99.90000% : 21878.942us
00:08:19.227 99.99000% : 22181.415us
00:08:19.227 99.99900% : 22181.415us
00:08:19.227 99.99990% : 22181.415us
00:08:19.227 99.99999% : 22181.415us
00:08:19.227
00:08:19.227 Summary latency data for PCIE (0000:00:13.0) NSID 1 from core 0:
00:08:19.227 =================================================================================
00:08:19.227 1.00000% : 5822.622us
00:08:19.227 10.00000% : 6276.332us
00:08:19.227 25.00000% : 6452.775us
00:08:19.227 50.00000% : 6654.425us
00:08:19.227 75.00000% : 6956.898us
00:08:19.227 90.00000% : 7864.320us
00:08:19.227 95.00000% : 8771.742us
00:08:19.227 98.00000% : 10384.935us
00:08:19.227 99.00000% : 13409.674us
00:08:19.227 99.50000% : 16636.062us
00:08:19.227 99.90000% : 21374.818us
00:08:19.227 99.99000% : 21576.468us
00:08:19.227 99.99900% : 21576.468us
00:08:19.227 99.99990% : 21576.468us
00:08:19.227 99.99999% : 21576.468us
00:08:19.227
00:08:19.227 Summary latency data for PCIE (0000:00:10.0) NSID 1 from core 0:
00:08:19.227 =================================================================================
00:08:19.227 1.00000% : 5772.209us
00:08:19.227 10.00000% : 6200.714us
00:08:19.227 25.00000% : 6377.157us
00:08:19.227 50.00000% : 6654.425us
00:08:19.227 75.00000% : 7057.723us
00:08:19.227 90.00000% : 7864.320us
00:08:19.227 95.00000% : 8771.742us
00:08:19.227 98.00000% : 10233.698us
00:08:19.227 99.00000% : 12552.665us
00:08:19.227 99.50000% : 16938.535us
00:08:19.227 99.90000% : 20366.572us
00:08:19.227 99.99000% : 20467.397us
00:08:19.227 99.99900% : 20467.397us
00:08:19.227 99.99990% : 20467.397us
00:08:19.227 99.99999% : 20467.397us
00:08:19.227
00:08:19.227 Summary latency data for PCIE (0000:00:12.0) NSID 1 from core 0:
00:08:19.227 =================================================================================
00:08:19.227 1.00000% : 5847.828us
00:08:19.227 10.00000% : 6276.332us
00:08:19.227 25.00000% : 6452.775us
00:08:19.227 50.00000% : 6654.425us
00:08:19.227 75.00000% : 6956.898us
00:08:19.227 90.00000% : 7864.320us
00:08:19.227 95.00000% : 8822.154us
00:08:19.227 98.00000% : 9981.637us
00:08:19.227 99.00000% : 12603.077us
00:08:19.227 99.50000% : 16333.588us
00:08:19.227 99.90000% : 19660.800us
00:08:19.227 99.99000% : 19963.274us
00:08:19.227 99.99900% : 19963.274us
00:08:19.227 99.99990% : 19963.274us
00:08:19.227 99.99999% : 19963.274us
00:08:19.227
00:08:19.227 Summary latency data for PCIE (0000:00:12.0) NSID 2 from core 0:
00:08:19.227 =================================================================================
00:08:19.228 1.00000% : 5822.622us
00:08:19.228 10.00000% : 6276.332us
00:08:19.228 25.00000% : 6452.775us
00:08:19.228 50.00000% : 6654.425us
00:08:19.228 75.00000% : 6906.486us
00:08:19.228 90.00000% : 7864.320us
00:08:19.228 95.00000% : 8670.917us
00:08:19.228 98.00000% : 10536.172us
00:08:19.228 99.00000% : 12754.314us
00:08:19.228 99.50000% : 16232.763us
00:08:19.228 99.90000% : 19156.677us
00:08:19.228 99.99000% : 19660.800us
00:08:19.228 99.99900% : 19660.800us
00:08:19.228 99.99990% : 19660.800us
00:08:19.228 99.99999% : 19660.800us
00:08:19.228
00:08:19.228 Summary latency data for PCIE (0000:00:12.0) NSID 3 from core 0:
00:08:19.228 =================================================================================
00:08:19.228 1.00000% : 5772.209us
00:08:19.228 10.00000% : 6276.332us
00:08:19.228 25.00000% : 6452.775us
00:08:19.228 50.00000% : 6654.425us
00:08:19.228 75.00000% : 6906.486us
00:08:19.228 90.00000% : 7864.320us
00:08:19.228 95.00000% : 8670.917us
00:08:19.228 98.00000% : 10687.409us
00:08:19.228 99.00000% : 13510.498us
00:08:19.228 99.50000% : 15526.991us
00:08:19.228 99.90000% : 18753.378us
00:08:19.228 99.99000% : 19257.502us
00:08:19.228 99.99900% : 19257.502us
00:08:19.228 99.99990% : 19257.502us
00:08:19.228 99.99999% : 19257.502us
00:08:19.228
00:08:19.228 Latency histogram for PCIE (0000:00:11.0) NSID 1 from core 0:
00:08:19.228 ==============================================================================
00:08:19.228 Range in us Cumulative IO count
00:08:19.228 5116.849 - 5142.055: 0.0054% ( 1)
00:08:19.228 5142.055 - 5167.262: 0.0216% ( 3)
00:08:19.228 5167.262 - 5192.468: 0.0324% ( 2)
00:08:19.228 5192.468 - 5217.674: 0.0595% ( 5)
00:08:19.228 5217.674 - 5242.880: 0.1298% ( 13)
00:08:19.228 5242.880 - 5268.086: 0.2054% ( 14)
00:08:19.228 5268.086 - 5293.292: 0.2217% ( 3)
00:08:19.228 5293.292 - 5318.498: 0.2325% ( 2)
00:08:19.228 5318.498 - 5343.705: 0.2487% ( 3)
00:08:19.228 5343.705 - 5368.911: 0.2595% ( 2)
00:08:19.228 5368.911 - 5394.117: 0.2703% ( 2)
00:08:19.228 5394.117 - 5419.323: 0.2865% ( 3)
00:08:19.228 5419.323 - 5444.529: 0.3028% ( 3)
00:08:19.228 5444.529 - 5469.735: 0.3190% ( 3)
00:08:19.228 5469.735 - 5494.942: 0.3298% ( 2)
00:08:19.228 5494.942 - 5520.148: 0.3460% ( 3)
00:08:19.228 5520.148 - 5545.354: 0.3622% ( 3)
00:08:19.228 5570.560 - 5595.766: 0.3676% ( 1)
00:08:19.228 5595.766 - 5620.972: 0.3893% ( 4)
00:08:19.228 5620.972 - 5646.178: 0.4163% ( 5)
00:08:19.228 5646.178 - 5671.385: 0.4596% ( 8)
00:08:19.228 5671.385 - 5696.591: 0.5244% ( 12)
00:08:19.228 5696.591 - 5721.797: 0.5893% ( 12)
00:08:19.228 5721.797 - 5747.003: 0.6596% ( 13)
00:08:19.228 5747.003 - 5772.209: 0.7353% ( 14)
00:08:19.228 5772.209 - 5797.415: 0.8272% ( 17)
00:08:19.228 5797.415 - 5822.622: 1.0002% ( 32)
00:08:19.228 5822.622 - 5847.828: 1.2111% ( 39)
00:08:19.228 5847.828 - 5873.034: 1.4003% ( 35)
00:08:19.228 5873.034 - 5898.240: 1.5409% ( 26)
00:08:19.228
5898.240 - 5923.446: 1.8545% ( 58) 00:08:19.228 5923.446 - 5948.652: 2.0545% ( 37) 00:08:19.228 5948.652 - 5973.858: 2.2275% ( 32) 00:08:19.228 5973.858 - 5999.065: 2.6925% ( 86) 00:08:19.228 5999.065 - 6024.271: 2.9141% ( 41) 00:08:19.228 6024.271 - 6049.477: 3.1737% ( 48) 00:08:19.228 6049.477 - 6074.683: 3.5900% ( 77) 00:08:19.228 6074.683 - 6099.889: 4.0657% ( 88) 00:08:19.228 6099.889 - 6125.095: 4.5794% ( 95) 00:08:19.228 6125.095 - 6150.302: 5.0822% ( 93) 00:08:19.228 6150.302 - 6175.508: 5.6066% ( 97) 00:08:19.228 6175.508 - 6200.714: 6.3581% ( 139) 00:08:19.228 6200.714 - 6225.920: 7.3097% ( 176) 00:08:19.228 6225.920 - 6251.126: 8.5640% ( 232) 00:08:19.228 6251.126 - 6276.332: 9.8508% ( 238) 00:08:19.228 6276.332 - 6301.538: 11.6566% ( 334) 00:08:19.228 6301.538 - 6326.745: 13.4191% ( 326) 00:08:19.228 6326.745 - 6351.951: 15.5169% ( 388) 00:08:19.228 6351.951 - 6377.157: 17.5551% ( 377) 00:08:19.228 6377.157 - 6402.363: 20.3936% ( 525) 00:08:19.228 6402.363 - 6427.569: 23.5835% ( 590) 00:08:19.228 6427.569 - 6452.775: 26.3895% ( 519) 00:08:19.228 6452.775 - 6503.188: 33.3532% ( 1288) 00:08:19.228 6503.188 - 6553.600: 41.3062% ( 1471) 00:08:19.228 6553.600 - 6604.012: 49.7135% ( 1555) 00:08:19.228 6604.012 - 6654.425: 56.1202% ( 1185) 00:08:19.228 6654.425 - 6704.837: 61.6566% ( 1024) 00:08:19.228 6704.837 - 6755.249: 65.6683% ( 742) 00:08:19.228 6755.249 - 6805.662: 69.4150% ( 693) 00:08:19.228 6805.662 - 6856.074: 72.4481% ( 561) 00:08:19.228 6856.074 - 6906.486: 75.2865% ( 525) 00:08:19.228 6906.486 - 6956.898: 77.1897% ( 352) 00:08:19.228 6956.898 - 7007.311: 78.7468% ( 288) 00:08:19.228 7007.311 - 7057.723: 80.1417% ( 258) 00:08:19.228 7057.723 - 7108.135: 81.2554% ( 206) 00:08:19.228 7108.135 - 7158.548: 82.2016% ( 175) 00:08:19.228 7158.548 - 7208.960: 82.7692% ( 105) 00:08:19.228 7208.960 - 7259.372: 83.6019% ( 154) 00:08:19.228 7259.372 - 7309.785: 84.0722% ( 87) 00:08:19.228 7309.785 - 7360.197: 84.6832% ( 113) 00:08:19.228 7360.197 - 7410.609: 85.3644% ( 126) 00:08:19.228 7410.609 - 7461.022: 86.0078% ( 119) 00:08:19.228 7461.022 - 7511.434: 87.0837% ( 199) 00:08:19.228 7511.434 - 7561.846: 87.5595% ( 88) 00:08:19.228 7561.846 - 7612.258: 88.0947% ( 99) 00:08:19.228 7612.258 - 7662.671: 88.6840% ( 109) 00:08:19.228 7662.671 - 7713.083: 89.0787% ( 73) 00:08:19.228 7713.083 - 7763.495: 89.5815% ( 93) 00:08:19.228 7763.495 - 7813.908: 90.1168% ( 99) 00:08:19.228 7813.908 - 7864.320: 90.4628% ( 64) 00:08:19.228 7864.320 - 7914.732: 90.9115% ( 83) 00:08:19.228 7914.732 - 7965.145: 91.2522% ( 63) 00:08:19.228 7965.145 - 8015.557: 91.7117% ( 85) 00:08:19.228 8015.557 - 8065.969: 92.2578% ( 101) 00:08:19.228 8065.969 - 8116.382: 92.6471% ( 72) 00:08:19.228 8116.382 - 8166.794: 92.8201% ( 32) 00:08:19.228 8166.794 - 8217.206: 92.9660% ( 27) 00:08:19.228 8217.206 - 8267.618: 93.1012% ( 25) 00:08:19.228 8267.618 - 8318.031: 93.2688% ( 31) 00:08:19.228 8318.031 - 8368.443: 93.4743% ( 38) 00:08:19.228 8368.443 - 8418.855: 93.7608% ( 53) 00:08:19.228 8418.855 - 8469.268: 93.9176% ( 29) 00:08:19.228 8469.268 - 8519.680: 94.0474% ( 24) 00:08:19.228 8519.680 - 8570.092: 94.1555% ( 20) 00:08:19.228 8570.092 - 8620.505: 94.2690% ( 21) 00:08:19.228 8620.505 - 8670.917: 94.3826% ( 21) 00:08:19.228 8670.917 - 8721.329: 94.6259% ( 45) 00:08:19.228 8721.329 - 8771.742: 94.9232% ( 55) 00:08:19.228 8771.742 - 8822.154: 95.2098% ( 53) 00:08:19.228 8822.154 - 8872.566: 95.4098% ( 37) 00:08:19.228 8872.566 - 8922.978: 95.6639% ( 47) 00:08:19.228 8922.978 - 8973.391: 95.8261% ( 30) 00:08:19.228 
8973.391 - 9023.803: 96.0099% ( 34) 00:08:19.228 9023.803 - 9074.215: 96.2965% ( 53) 00:08:19.228 9074.215 - 9124.628: 96.3614% ( 12) 00:08:19.228 9124.628 - 9175.040: 96.4100% ( 9) 00:08:19.228 9175.040 - 9225.452: 96.4533% ( 8) 00:08:19.228 9225.452 - 9275.865: 96.4857% ( 6) 00:08:19.228 9275.865 - 9326.277: 96.5236% ( 7) 00:08:19.228 9326.277 - 9376.689: 96.5830% ( 11) 00:08:19.228 9376.689 - 9427.102: 96.7290% ( 27) 00:08:19.228 9427.102 - 9477.514: 96.8317% ( 19) 00:08:19.228 9477.514 - 9527.926: 96.8480% ( 3) 00:08:19.228 9527.926 - 9578.338: 96.9128% ( 12) 00:08:19.228 9578.338 - 9628.751: 96.9831% ( 13) 00:08:19.228 9628.751 - 9679.163: 97.0372% ( 10) 00:08:19.228 9679.163 - 9729.575: 97.0859% ( 9) 00:08:19.228 9729.575 - 9779.988: 97.1453% ( 11) 00:08:19.228 9779.988 - 9830.400: 97.1940% ( 9) 00:08:19.228 9830.400 - 9880.812: 97.2589% ( 12) 00:08:19.228 9880.812 - 9931.225: 97.3778% ( 22) 00:08:19.228 9931.225 - 9981.637: 97.4481% ( 13) 00:08:19.228 9981.637 - 10032.049: 97.4859% ( 7) 00:08:19.228 10032.049 - 10082.462: 97.5238% ( 7) 00:08:19.228 10082.462 - 10132.874: 97.5616% ( 7) 00:08:19.228 10132.874 - 10183.286: 97.5887% ( 5) 00:08:19.228 10183.286 - 10233.698: 97.6211% ( 6) 00:08:19.228 10233.698 - 10284.111: 97.6590% ( 7) 00:08:19.228 10284.111 - 10334.523: 97.6914% ( 6) 00:08:19.228 10334.523 - 10384.935: 97.7238% ( 6) 00:08:19.228 10384.935 - 10435.348: 97.7563% ( 6) 00:08:19.228 10435.348 - 10485.760: 97.7941% ( 7) 00:08:19.228 10485.760 - 10536.172: 97.8428% ( 9) 00:08:19.228 10536.172 - 10586.585: 97.9022% ( 11) 00:08:19.228 10586.585 - 10636.997: 97.9509% ( 9) 00:08:19.228 10636.997 - 10687.409: 97.9996% ( 9) 00:08:19.228 10687.409 - 10737.822: 98.0807% ( 15) 00:08:19.228 10737.822 - 10788.234: 98.1780% ( 18) 00:08:19.228 10788.234 - 10838.646: 98.2591% ( 15) 00:08:19.228 10838.646 - 10889.058: 98.3131% ( 10) 00:08:19.228 10889.058 - 10939.471: 98.3780% ( 12) 00:08:19.228 10939.471 - 10989.883: 98.4159% ( 7) 00:08:19.228 10989.883 - 11040.295: 98.4483% ( 6) 00:08:19.228 11040.295 - 11090.708: 98.4753% ( 5) 00:08:19.228 11090.708 - 11141.120: 98.4916% ( 3) 00:08:19.228 11141.120 - 11191.532: 98.5348% ( 8) 00:08:19.228 11191.532 - 11241.945: 98.5997% ( 12) 00:08:19.228 11241.945 - 11292.357: 98.6538% ( 10) 00:08:19.228 11292.357 - 11342.769: 98.8322% ( 33) 00:08:19.228 11342.769 - 11393.182: 98.8862% ( 10) 00:08:19.228 11393.182 - 11443.594: 98.9133% ( 5) 00:08:19.228 11443.594 - 11494.006: 98.9403% ( 5) 00:08:19.228 11494.006 - 11544.418: 98.9565% ( 3) 00:08:19.228 11544.418 - 11594.831: 98.9619% ( 1) 00:08:19.228 13611.323 - 13712.148: 98.9673% ( 1) 00:08:19.228 14014.622 - 14115.446: 98.9944% ( 5) 00:08:19.228 14115.446 - 14216.271: 99.0268% ( 6) 00:08:19.228 14216.271 - 14317.095: 99.1944% ( 31) 00:08:19.228 14317.095 - 14417.920: 99.2269% ( 6) 00:08:19.229 14417.920 - 14518.745: 99.2539% ( 5) 00:08:19.229 14518.745 - 14619.569: 99.2809% ( 5) 00:08:19.229 14619.569 - 14720.394: 99.3080% ( 5) 00:08:19.229 15829.465 - 15930.289: 99.3566% ( 9) 00:08:19.229 15930.289 - 16031.114: 99.4161% ( 11) 00:08:19.229 16031.114 - 16131.938: 99.4647% ( 9) 00:08:19.229 16131.938 - 16232.763: 99.4864% ( 4) 00:08:19.229 16333.588 - 16434.412: 99.5188% ( 6) 00:08:19.229 16434.412 - 16535.237: 99.5513% ( 6) 00:08:19.229 16535.237 - 16636.062: 99.5837% ( 6) 00:08:19.229 16636.062 - 16736.886: 99.6107% ( 5) 00:08:19.229 16736.886 - 16837.711: 99.6432% ( 6) 00:08:19.229 16837.711 - 16938.535: 99.6540% ( 2) 00:08:19.229 20870.695 - 20971.520: 99.6756% ( 4) 00:08:19.229 20971.520 - 
21072.345: 99.7026% ( 5) 00:08:19.229 21072.345 - 21173.169: 99.8270% ( 23) 00:08:19.229 21576.468 - 21677.292: 99.8594% ( 6) 00:08:19.229 21677.292 - 21778.117: 99.8919% ( 6) 00:08:19.229 21778.117 - 21878.942: 99.9243% ( 6) 00:08:19.229 21878.942 - 21979.766: 99.9567% ( 6) 00:08:19.229 21979.766 - 22080.591: 99.9892% ( 6) 00:08:19.229 22080.591 - 22181.415: 100.0000% ( 2) 00:08:19.229 00:08:19.229 Latency histogram for PCIE (0000:00:13.0) NSID 1 from core 0: 00:08:19.229 ============================================================================== 00:08:19.229 Range in us Cumulative IO count 00:08:19.229 4915.200 - 4940.406: 0.0216% ( 4) 00:08:19.229 4940.406 - 4965.612: 0.0541% ( 6) 00:08:19.229 4965.612 - 4990.818: 0.0811% ( 5) 00:08:19.229 4990.818 - 5016.025: 0.1298% ( 9) 00:08:19.229 5016.025 - 5041.231: 0.1784% ( 9) 00:08:19.229 5041.231 - 5066.437: 0.2433% ( 12) 00:08:19.229 5066.437 - 5091.643: 0.2649% ( 4) 00:08:19.229 5091.643 - 5116.849: 0.2757% ( 2) 00:08:19.229 5116.849 - 5142.055: 0.2920% ( 3) 00:08:19.229 5142.055 - 5167.262: 0.3028% ( 2) 00:08:19.229 5167.262 - 5192.468: 0.3136% ( 2) 00:08:19.229 5192.468 - 5217.674: 0.3298% ( 3) 00:08:19.229 5217.674 - 5242.880: 0.3406% ( 2) 00:08:19.229 5242.880 - 5268.086: 0.3460% ( 1) 00:08:19.229 5570.560 - 5595.766: 0.3514% ( 1) 00:08:19.229 5595.766 - 5620.972: 0.3622% ( 2) 00:08:19.229 5620.972 - 5646.178: 0.4163% ( 10) 00:08:19.229 5646.178 - 5671.385: 0.4704% ( 10) 00:08:19.229 5671.385 - 5696.591: 0.5515% ( 15) 00:08:19.229 5696.591 - 5721.797: 0.6434% ( 17) 00:08:19.229 5721.797 - 5747.003: 0.7407% ( 18) 00:08:19.229 5747.003 - 5772.209: 0.8272% ( 16) 00:08:19.229 5772.209 - 5797.415: 0.9245% ( 18) 00:08:19.229 5797.415 - 5822.622: 1.1408% ( 40) 00:08:19.229 5822.622 - 5847.828: 1.3733% ( 43) 00:08:19.229 5847.828 - 5873.034: 1.5192% ( 27) 00:08:19.229 5873.034 - 5898.240: 1.7625% ( 45) 00:08:19.229 5898.240 - 5923.446: 1.9410% ( 33) 00:08:19.229 5923.446 - 5948.652: 2.0869% ( 27) 00:08:19.229 5948.652 - 5973.858: 2.3086% ( 41) 00:08:19.229 5973.858 - 5999.065: 2.6817% ( 69) 00:08:19.229 5999.065 - 6024.271: 3.0547% ( 69) 00:08:19.229 6024.271 - 6049.477: 3.3142% ( 48) 00:08:19.229 6049.477 - 6074.683: 3.7413% ( 79) 00:08:19.229 6074.683 - 6099.889: 4.1522% ( 76) 00:08:19.229 6099.889 - 6125.095: 4.8929% ( 137) 00:08:19.229 6125.095 - 6150.302: 5.4498% ( 103) 00:08:19.229 6150.302 - 6175.508: 6.1419% ( 128) 00:08:19.229 6175.508 - 6200.714: 6.7204% ( 107) 00:08:19.229 6200.714 - 6225.920: 7.7747% ( 195) 00:08:19.229 6225.920 - 6251.126: 8.9425% ( 216) 00:08:19.229 6251.126 - 6276.332: 10.0184% ( 199) 00:08:19.229 6276.332 - 6301.538: 11.8350% ( 336) 00:08:19.229 6301.538 - 6326.745: 13.7111% ( 347) 00:08:19.229 6326.745 - 6351.951: 15.7169% ( 371) 00:08:19.229 6351.951 - 6377.157: 18.1769% ( 455) 00:08:19.229 6377.157 - 6402.363: 20.6585% ( 459) 00:08:19.229 6402.363 - 6427.569: 24.0971% ( 636) 00:08:19.229 6427.569 - 6452.775: 26.9518% ( 528) 00:08:19.229 6452.775 - 6503.188: 33.3369% ( 1181) 00:08:19.229 6503.188 - 6553.600: 40.9548% ( 1409) 00:08:19.229 6553.600 - 6604.012: 48.5186% ( 1399) 00:08:19.229 6604.012 - 6654.425: 55.8986% ( 1365) 00:08:19.229 6654.425 - 6704.837: 61.4349% ( 1024) 00:08:19.229 6704.837 - 6755.249: 65.5223% ( 756) 00:08:19.229 6755.249 - 6805.662: 69.0095% ( 645) 00:08:19.229 6805.662 - 6856.074: 71.9074% ( 536) 00:08:19.229 6856.074 - 6906.486: 74.5296% ( 485) 00:08:19.229 6906.486 - 6956.898: 76.6220% ( 387) 00:08:19.229 6956.898 - 7007.311: 78.7576% ( 395) 00:08:19.229 7007.311 - 7057.723: 
80.5796% ( 337) 00:08:19.229 7057.723 - 7108.135: 81.5852% ( 186) 00:08:19.229 7108.135 - 7158.548: 82.2881% ( 130) 00:08:19.229 7158.548 - 7208.960: 82.9855% ( 129) 00:08:19.229 7208.960 - 7259.372: 83.5856% ( 111) 00:08:19.229 7259.372 - 7309.785: 84.1425% ( 103) 00:08:19.229 7309.785 - 7360.197: 84.6832% ( 100) 00:08:19.229 7360.197 - 7410.609: 85.4239% ( 137) 00:08:19.229 7410.609 - 7461.022: 86.0132% ( 109) 00:08:19.229 7461.022 - 7511.434: 86.7485% ( 136) 00:08:19.229 7511.434 - 7561.846: 87.3000% ( 102) 00:08:19.229 7561.846 - 7612.258: 88.0136% ( 132) 00:08:19.229 7612.258 - 7662.671: 88.5056% ( 91) 00:08:19.229 7662.671 - 7713.083: 88.8679% ( 67) 00:08:19.229 7713.083 - 7763.495: 89.2625% ( 73) 00:08:19.229 7763.495 - 7813.908: 89.7383% ( 88) 00:08:19.229 7813.908 - 7864.320: 90.2141% ( 88) 00:08:19.229 7864.320 - 7914.732: 90.7602% ( 101) 00:08:19.229 7914.732 - 7965.145: 91.3495% ( 109) 00:08:19.229 7965.145 - 8015.557: 91.8469% ( 92) 00:08:19.229 8015.557 - 8065.969: 92.2956% ( 83) 00:08:19.229 8065.969 - 8116.382: 92.7930% ( 92) 00:08:19.229 8116.382 - 8166.794: 93.0850% ( 54) 00:08:19.229 8166.794 - 8217.206: 93.3553% ( 50) 00:08:19.229 8217.206 - 8267.618: 93.6202% ( 49) 00:08:19.229 8267.618 - 8318.031: 93.8906% ( 50) 00:08:19.229 8318.031 - 8368.443: 94.1014% ( 39) 00:08:19.229 8368.443 - 8418.855: 94.2258% ( 23) 00:08:19.229 8418.855 - 8469.268: 94.3663% ( 26) 00:08:19.229 8469.268 - 8519.680: 94.5502% ( 34) 00:08:19.229 8519.680 - 8570.092: 94.6583% ( 20) 00:08:19.229 8570.092 - 8620.505: 94.7124% ( 10) 00:08:19.229 8620.505 - 8670.917: 94.7827% ( 13) 00:08:19.229 8670.917 - 8721.329: 94.9449% ( 30) 00:08:19.229 8721.329 - 8771.742: 95.1179% ( 32) 00:08:19.229 8771.742 - 8822.154: 95.3017% ( 34) 00:08:19.229 8822.154 - 8872.566: 95.4801% ( 33) 00:08:19.229 8872.566 - 8922.978: 95.5828% ( 19) 00:08:19.229 8922.978 - 8973.391: 95.6747% ( 17) 00:08:19.229 8973.391 - 9023.803: 95.8586% ( 34) 00:08:19.229 9023.803 - 9074.215: 95.9667% ( 20) 00:08:19.229 9074.215 - 9124.628: 96.1938% ( 42) 00:08:19.229 9124.628 - 9175.040: 96.2587% ( 12) 00:08:19.229 9175.040 - 9225.452: 96.3289% ( 13) 00:08:19.229 9225.452 - 9275.865: 96.4046% ( 14) 00:08:19.229 9275.865 - 9326.277: 96.4641% ( 11) 00:08:19.229 9326.277 - 9376.689: 96.5885% ( 23) 00:08:19.229 9376.689 - 9427.102: 96.7561% ( 31) 00:08:19.229 9427.102 - 9477.514: 96.8750% ( 22) 00:08:19.229 9477.514 - 9527.926: 96.9345% ( 11) 00:08:19.229 9527.926 - 9578.338: 96.9885% ( 10) 00:08:19.229 9578.338 - 9628.751: 97.0426% ( 10) 00:08:19.229 9628.751 - 9679.163: 97.0913% ( 9) 00:08:19.229 9679.163 - 9729.575: 97.1237% ( 6) 00:08:19.229 9729.575 - 9779.988: 97.1778% ( 10) 00:08:19.229 9779.988 - 9830.400: 97.2318% ( 10) 00:08:19.229 9830.400 - 9880.812: 97.3021% ( 13) 00:08:19.229 9880.812 - 9931.225: 97.3778% ( 14) 00:08:19.229 9931.225 - 9981.637: 97.4535% ( 14) 00:08:19.229 9981.637 - 10032.049: 97.5779% ( 23) 00:08:19.229 10032.049 - 10082.462: 97.6698% ( 17) 00:08:19.229 10082.462 - 10132.874: 97.7617% ( 17) 00:08:19.229 10132.874 - 10183.286: 97.8644% ( 19) 00:08:19.229 10183.286 - 10233.698: 97.8968% ( 6) 00:08:19.229 10233.698 - 10284.111: 97.9401% ( 8) 00:08:19.229 10284.111 - 10334.523: 97.9779% ( 7) 00:08:19.229 10334.523 - 10384.935: 98.0158% ( 7) 00:08:19.229 10384.935 - 10435.348: 98.0536% ( 7) 00:08:19.229 10435.348 - 10485.760: 98.0753% ( 4) 00:08:19.229 10485.760 - 10536.172: 98.1023% ( 5) 00:08:19.229 10536.172 - 10586.585: 98.1293% ( 5) 00:08:19.229 10586.585 - 10636.997: 98.1564% ( 5) 00:08:19.229 10636.997 - 
10687.409: 98.1834% ( 5) 00:08:19.229 10687.409 - 10737.822: 98.2050% ( 4) 00:08:19.229 10737.822 - 10788.234: 98.2321% ( 5) 00:08:19.229 10788.234 - 10838.646: 98.2591% ( 5) 00:08:19.229 10838.646 - 10889.058: 98.2861% ( 5) 00:08:19.229 10889.058 - 10939.471: 98.3186% ( 6) 00:08:19.229 10939.471 - 10989.883: 98.3294% ( 2) 00:08:19.229 10989.883 - 11040.295: 98.3402% ( 2) 00:08:19.229 11040.295 - 11090.708: 98.3510% ( 2) 00:08:19.229 11090.708 - 11141.120: 98.3726% ( 4) 00:08:19.229 11141.120 - 11191.532: 98.3834% ( 2) 00:08:19.229 11191.532 - 11241.945: 98.3942% ( 2) 00:08:19.229 11241.945 - 11292.357: 98.4051% ( 2) 00:08:19.229 11292.357 - 11342.769: 98.4159% ( 2) 00:08:19.229 11342.769 - 11393.182: 98.4375% ( 4) 00:08:19.229 11393.182 - 11443.594: 98.4645% ( 5) 00:08:19.229 11443.594 - 11494.006: 98.5294% ( 12) 00:08:19.229 11494.006 - 11544.418: 98.5943% ( 12) 00:08:19.229 11544.418 - 11594.831: 98.7835% ( 35) 00:08:19.229 11594.831 - 11645.243: 98.8646% ( 15) 00:08:19.229 11645.243 - 11695.655: 98.9025% ( 7) 00:08:19.229 11695.655 - 11746.068: 98.9295% ( 5) 00:08:19.229 11746.068 - 11796.480: 98.9403% ( 2) 00:08:19.229 11796.480 - 11846.892: 98.9511% ( 2) 00:08:19.229 11846.892 - 11897.305: 98.9619% ( 2) 00:08:19.229 13208.025 - 13308.849: 98.9944% ( 6) 00:08:19.229 13308.849 - 13409.674: 99.0430% ( 9) 00:08:19.229 13409.674 - 13510.498: 99.0755% ( 6) 00:08:19.229 13510.498 - 13611.323: 99.1566% ( 15) 00:08:19.230 13611.323 - 13712.148: 99.1836% ( 5) 00:08:19.230 13712.148 - 13812.972: 99.1998% ( 3) 00:08:19.230 13812.972 - 13913.797: 99.2215% ( 4) 00:08:19.230 13913.797 - 14014.622: 99.2431% ( 4) 00:08:19.230 14014.622 - 14115.446: 99.2647% ( 4) 00:08:19.230 14115.446 - 14216.271: 99.2863% ( 4) 00:08:19.230 14216.271 - 14317.095: 99.3080% ( 4) 00:08:19.230 16031.114 - 16131.938: 99.3404% ( 6) 00:08:19.230 16131.938 - 16232.763: 99.3728% ( 6) 00:08:19.230 16232.763 - 16333.588: 99.4107% ( 7) 00:08:19.230 16333.588 - 16434.412: 99.4485% ( 7) 00:08:19.230 16434.412 - 16535.237: 99.4810% ( 6) 00:08:19.230 16535.237 - 16636.062: 99.5188% ( 7) 00:08:19.230 16636.062 - 16736.886: 99.5567% ( 7) 00:08:19.230 16736.886 - 16837.711: 99.5945% ( 7) 00:08:19.230 16837.711 - 16938.535: 99.6324% ( 7) 00:08:19.230 16938.535 - 17039.360: 99.6540% ( 4) 00:08:19.230 20568.222 - 20669.046: 99.6810% ( 5) 00:08:19.230 20669.046 - 20769.871: 99.7135% ( 6) 00:08:19.230 20769.871 - 20870.695: 99.7513% ( 7) 00:08:19.230 20870.695 - 20971.520: 99.7891% ( 7) 00:08:19.230 20971.520 - 21072.345: 99.8162% ( 5) 00:08:19.230 21072.345 - 21173.169: 99.8540% ( 7) 00:08:19.230 21173.169 - 21273.994: 99.8919% ( 7) 00:08:19.230 21273.994 - 21374.818: 99.9297% ( 7) 00:08:19.230 21374.818 - 21475.643: 99.9622% ( 6) 00:08:19.230 21475.643 - 21576.468: 100.0000% ( 7) 00:08:19.230 00:08:19.230 Latency histogram for PCIE (0000:00:10.0) NSID 1 from core 0: 00:08:19.230 ============================================================================== 00:08:19.230 Range in us Cumulative IO count 00:08:19.230 4461.489 - 4486.695: 0.0216% ( 4) 00:08:19.230 4486.695 - 4511.902: 0.0433% ( 4) 00:08:19.230 4511.902 - 4537.108: 0.0703% ( 5) 00:08:19.230 4537.108 - 4562.314: 0.0865% ( 3) 00:08:19.230 4562.314 - 4587.520: 0.1027% ( 3) 00:08:19.230 4587.520 - 4612.726: 0.1244% ( 4) 00:08:19.230 4612.726 - 4637.932: 0.1460% ( 4) 00:08:19.230 4637.932 - 4663.138: 0.1676% ( 4) 00:08:19.230 4663.138 - 4688.345: 0.1838% ( 3) 00:08:19.230 4688.345 - 4713.551: 0.1892% ( 1) 00:08:19.230 4713.551 - 4738.757: 0.1946% ( 1) 00:08:19.230 4738.757 - 
4763.963: 0.2054% ( 2) 00:08:19.230 4763.963 - 4789.169: 0.2109% ( 1) 00:08:19.230 4789.169 - 4814.375: 0.2217% ( 2) 00:08:19.230 4814.375 - 4839.582: 0.2271% ( 1) 00:08:19.230 4839.582 - 4864.788: 0.2379% ( 2) 00:08:19.230 4864.788 - 4889.994: 0.3406% ( 19) 00:08:19.230 5091.643 - 5116.849: 0.3460% ( 1) 00:08:19.230 5394.117 - 5419.323: 0.3731% ( 5) 00:08:19.230 5419.323 - 5444.529: 0.3893% ( 3) 00:08:19.230 5469.735 - 5494.942: 0.3947% ( 1) 00:08:19.230 5494.942 - 5520.148: 0.4109% ( 3) 00:08:19.230 5520.148 - 5545.354: 0.4920% ( 15) 00:08:19.230 5545.354 - 5570.560: 0.5569% ( 12) 00:08:19.230 5570.560 - 5595.766: 0.5785% ( 4) 00:08:19.230 5595.766 - 5620.972: 0.6001% ( 4) 00:08:19.230 5620.972 - 5646.178: 0.6488% ( 9) 00:08:19.230 5646.178 - 5671.385: 0.7515% ( 19) 00:08:19.230 5671.385 - 5696.591: 0.8002% ( 9) 00:08:19.230 5696.591 - 5721.797: 0.8921% ( 17) 00:08:19.230 5721.797 - 5747.003: 0.9407% ( 9) 00:08:19.230 5747.003 - 5772.209: 1.0813% ( 26) 00:08:19.230 5772.209 - 5797.415: 1.2219% ( 26) 00:08:19.230 5797.415 - 5822.622: 1.4327% ( 39) 00:08:19.230 5822.622 - 5847.828: 1.7301% ( 55) 00:08:19.230 5847.828 - 5873.034: 1.9788% ( 46) 00:08:19.230 5873.034 - 5898.240: 2.2762% ( 55) 00:08:19.230 5898.240 - 5923.446: 2.6871% ( 76) 00:08:19.230 5923.446 - 5948.652: 2.9736% ( 53) 00:08:19.230 5948.652 - 5973.858: 3.2710% ( 55) 00:08:19.230 5973.858 - 5999.065: 3.5629% ( 54) 00:08:19.230 5999.065 - 6024.271: 3.9306% ( 68) 00:08:19.230 6024.271 - 6049.477: 4.4550% ( 97) 00:08:19.230 6049.477 - 6074.683: 5.3201% ( 160) 00:08:19.230 6074.683 - 6099.889: 6.0986% ( 144) 00:08:19.230 6099.889 - 6125.095: 6.9529% ( 158) 00:08:19.230 6125.095 - 6150.302: 8.2883% ( 247) 00:08:19.230 6150.302 - 6175.508: 9.5318% ( 230) 00:08:19.230 6175.508 - 6200.714: 10.6780% ( 212) 00:08:19.230 6200.714 - 6225.920: 12.2405% ( 289) 00:08:19.230 6225.920 - 6251.126: 13.8733% ( 302) 00:08:19.230 6251.126 - 6276.332: 15.6683% ( 332) 00:08:19.230 6276.332 - 6301.538: 18.0201% ( 435) 00:08:19.230 6301.538 - 6326.745: 20.7018% ( 496) 00:08:19.230 6326.745 - 6351.951: 23.4051% ( 500) 00:08:19.230 6351.951 - 6377.157: 26.7031% ( 610) 00:08:19.230 6377.157 - 6402.363: 29.5091% ( 519) 00:08:19.230 6402.363 - 6427.569: 31.9745% ( 456) 00:08:19.230 6427.569 - 6452.775: 34.4669% ( 461) 00:08:19.230 6452.775 - 6503.188: 39.4193% ( 916) 00:08:19.230 6503.188 - 6553.600: 43.9284% ( 834) 00:08:19.230 6553.600 - 6604.012: 48.1455% ( 780) 00:08:19.230 6604.012 - 6654.425: 52.5465% ( 814) 00:08:19.230 6654.425 - 6704.837: 56.2067% ( 677) 00:08:19.230 6704.837 - 6755.249: 59.7751% ( 660) 00:08:19.230 6755.249 - 6805.662: 63.0515% ( 606) 00:08:19.230 6805.662 - 6856.074: 66.6090% ( 658) 00:08:19.230 6856.074 - 6906.486: 69.8421% ( 598) 00:08:19.230 6906.486 - 6956.898: 72.6535% ( 520) 00:08:19.230 6956.898 - 7007.311: 74.9027% ( 416) 00:08:19.230 7007.311 - 7057.723: 77.0167% ( 391) 00:08:19.230 7057.723 - 7108.135: 78.8711% ( 343) 00:08:19.230 7108.135 - 7158.548: 80.3579% ( 275) 00:08:19.230 7158.548 - 7208.960: 81.5420% ( 219) 00:08:19.230 7208.960 - 7259.372: 82.6773% ( 210) 00:08:19.230 7259.372 - 7309.785: 83.5045% ( 153) 00:08:19.230 7309.785 - 7360.197: 84.2236% ( 133) 00:08:19.230 7360.197 - 7410.609: 84.8454% ( 115) 00:08:19.230 7410.609 - 7461.022: 85.7212% ( 162) 00:08:19.230 7461.022 - 7511.434: 86.3971% ( 125) 00:08:19.230 7511.434 - 7561.846: 87.0458% ( 120) 00:08:19.230 7561.846 - 7612.258: 87.7595% ( 132) 00:08:19.230 7612.258 - 7662.671: 88.3380% ( 107) 00:08:19.230 7662.671 - 7713.083: 88.7868% ( 83) 00:08:19.230 
7713.083 - 7763.495: 89.2679% ( 89) 00:08:19.230 7763.495 - 7813.908: 89.8681% ( 111) 00:08:19.230 7813.908 - 7864.320: 90.4520% ( 108) 00:08:19.230 7864.320 - 7914.732: 90.8845% ( 80) 00:08:19.230 7914.732 - 7965.145: 91.3549% ( 87) 00:08:19.230 7965.145 - 8015.557: 91.7225% ( 68) 00:08:19.230 8015.557 - 8065.969: 92.1280% ( 75) 00:08:19.230 8065.969 - 8116.382: 92.5551% ( 79) 00:08:19.230 8116.382 - 8166.794: 92.8904% ( 62) 00:08:19.230 8166.794 - 8217.206: 93.1391% ( 46) 00:08:19.230 8217.206 - 8267.618: 93.3932% ( 47) 00:08:19.230 8267.618 - 8318.031: 93.6040% ( 39) 00:08:19.230 8318.031 - 8368.443: 93.8203% ( 40) 00:08:19.230 8368.443 - 8418.855: 94.0149% ( 36) 00:08:19.230 8418.855 - 8469.268: 94.2096% ( 36) 00:08:19.230 8469.268 - 8519.680: 94.3772% ( 31) 00:08:19.230 8519.680 - 8570.092: 94.5610% ( 34) 00:08:19.230 8570.092 - 8620.505: 94.6799% ( 22) 00:08:19.230 8620.505 - 8670.917: 94.7718% ( 17) 00:08:19.230 8670.917 - 8721.329: 94.8908% ( 22) 00:08:19.230 8721.329 - 8771.742: 95.1287% ( 44) 00:08:19.230 8771.742 - 8822.154: 95.2692% ( 26) 00:08:19.230 8822.154 - 8872.566: 95.4206% ( 28) 00:08:19.230 8872.566 - 8922.978: 95.5612% ( 26) 00:08:19.230 8922.978 - 8973.391: 95.6639% ( 19) 00:08:19.230 8973.391 - 9023.803: 95.7829% ( 22) 00:08:19.230 9023.803 - 9074.215: 95.9613% ( 33) 00:08:19.230 9074.215 - 9124.628: 96.0586% ( 18) 00:08:19.230 9124.628 - 9175.040: 96.1343% ( 14) 00:08:19.230 9175.040 - 9225.452: 96.2100% ( 14) 00:08:19.230 9225.452 - 9275.865: 96.3614% ( 28) 00:08:19.230 9275.865 - 9326.277: 96.5452% ( 34) 00:08:19.230 9326.277 - 9376.689: 96.6696% ( 23) 00:08:19.230 9376.689 - 9427.102: 96.7777% ( 20) 00:08:19.230 9427.102 - 9477.514: 96.8804% ( 19) 00:08:19.230 9477.514 - 9527.926: 96.9831% ( 19) 00:08:19.230 9527.926 - 9578.338: 97.0642% ( 15) 00:08:19.230 9578.338 - 9628.751: 97.1615% ( 18) 00:08:19.230 9628.751 - 9679.163: 97.2426% ( 15) 00:08:19.230 9679.163 - 9729.575: 97.3292% ( 16) 00:08:19.230 9729.575 - 9779.988: 97.4265% ( 18) 00:08:19.230 9779.988 - 9830.400: 97.5292% ( 19) 00:08:19.230 9830.400 - 9880.812: 97.5995% ( 13) 00:08:19.230 9880.812 - 9931.225: 97.6644% ( 12) 00:08:19.230 9931.225 - 9981.637: 97.7292% ( 12) 00:08:19.230 9981.637 - 10032.049: 97.7779% ( 9) 00:08:19.230 10032.049 - 10082.462: 97.8374% ( 11) 00:08:19.230 10082.462 - 10132.874: 97.9077% ( 13) 00:08:19.230 10132.874 - 10183.286: 97.9725% ( 12) 00:08:19.230 10183.286 - 10233.698: 98.0104% ( 7) 00:08:19.230 10233.698 - 10284.111: 98.0374% ( 5) 00:08:19.230 10284.111 - 10334.523: 98.0699% ( 6) 00:08:19.230 10334.523 - 10384.935: 98.0861% ( 3) 00:08:19.230 10384.935 - 10435.348: 98.1185% ( 6) 00:08:19.230 10435.348 - 10485.760: 98.1401% ( 4) 00:08:19.230 10485.760 - 10536.172: 98.1564% ( 3) 00:08:19.230 10536.172 - 10586.585: 98.1780% ( 4) 00:08:19.230 10586.585 - 10636.997: 98.2050% ( 5) 00:08:19.230 10636.997 - 10687.409: 98.2104% ( 1) 00:08:19.230 10687.409 - 10737.822: 98.2266% ( 3) 00:08:19.230 10737.822 - 10788.234: 98.2375% ( 2) 00:08:19.230 10788.234 - 10838.646: 98.2483% ( 2) 00:08:19.230 10838.646 - 10889.058: 98.2537% ( 1) 00:08:19.230 10889.058 - 10939.471: 98.2699% ( 3) 00:08:19.230 11241.945 - 11292.357: 98.2807% ( 2) 00:08:19.230 11292.357 - 11342.769: 98.2915% ( 2) 00:08:19.230 11342.769 - 11393.182: 98.3294% ( 7) 00:08:19.230 11393.182 - 11443.594: 98.3618% ( 6) 00:08:19.230 11443.594 - 11494.006: 98.3942% ( 6) 00:08:19.230 11494.006 - 11544.418: 98.4321% ( 7) 00:08:19.230 11544.418 - 11594.831: 98.4591% ( 5) 00:08:19.231 11594.831 - 11645.243: 98.4862% ( 5) 
00:08:19.231 11645.243 - 11695.655: 98.5024% ( 3) 00:08:19.231 11695.655 - 11746.068: 98.5240% ( 4) 00:08:19.231 11746.068 - 11796.480: 98.5510% ( 5) 00:08:19.231 11796.480 - 11846.892: 98.5781% ( 5) 00:08:19.231 11846.892 - 11897.305: 98.5943% ( 3) 00:08:19.231 11897.305 - 11947.717: 98.6159% ( 4) 00:08:19.231 11947.717 - 11998.129: 98.6429% ( 5) 00:08:19.231 11998.129 - 12048.542: 98.6538% ( 2) 00:08:19.231 12048.542 - 12098.954: 98.6754% ( 4) 00:08:19.231 12098.954 - 12149.366: 98.7024% ( 5) 00:08:19.231 12149.366 - 12199.778: 98.7240% ( 4) 00:08:19.231 12199.778 - 12250.191: 98.7727% ( 9) 00:08:19.231 12250.191 - 12300.603: 98.8268% ( 10) 00:08:19.231 12300.603 - 12351.015: 98.8592% ( 6) 00:08:19.231 12351.015 - 12401.428: 98.9079% ( 9) 00:08:19.231 12401.428 - 12451.840: 98.9565% ( 9) 00:08:19.231 12451.840 - 12502.252: 98.9998% ( 8) 00:08:19.231 12502.252 - 12552.665: 99.0052% ( 1) 00:08:19.231 12552.665 - 12603.077: 99.0322% ( 5) 00:08:19.231 12603.077 - 12653.489: 99.0593% ( 5) 00:08:19.231 12653.489 - 12703.902: 99.0701% ( 2) 00:08:19.231 12703.902 - 12754.314: 99.0809% ( 2) 00:08:19.231 12754.314 - 12804.726: 99.1025% ( 4) 00:08:19.231 12804.726 - 12855.138: 99.1187% ( 3) 00:08:19.231 12855.138 - 12905.551: 99.1512% ( 6) 00:08:19.231 12905.551 - 13006.375: 99.1782% ( 5) 00:08:19.231 13006.375 - 13107.200: 99.1998% ( 4) 00:08:19.231 13107.200 - 13208.025: 99.2106% ( 2) 00:08:19.231 13208.025 - 13308.849: 99.2215% ( 2) 00:08:19.231 13308.849 - 13409.674: 99.2377% ( 3) 00:08:19.231 13409.674 - 13510.498: 99.2539% ( 3) 00:08:19.231 13510.498 - 13611.323: 99.2701% ( 3) 00:08:19.231 13611.323 - 13712.148: 99.2809% ( 2) 00:08:19.231 13712.148 - 13812.972: 99.2917% ( 2) 00:08:19.231 13812.972 - 13913.797: 99.3080% ( 3) 00:08:19.231 15728.640 - 15829.465: 99.3188% ( 2) 00:08:19.231 15829.465 - 15930.289: 99.3458% ( 5) 00:08:19.231 15930.289 - 16031.114: 99.3620% ( 3) 00:08:19.231 16031.114 - 16131.938: 99.3782% ( 3) 00:08:19.231 16131.938 - 16232.763: 99.3945% ( 3) 00:08:19.231 16232.763 - 16333.588: 99.4161% ( 4) 00:08:19.231 16333.588 - 16434.412: 99.4323% ( 3) 00:08:19.231 16434.412 - 16535.237: 99.4485% ( 3) 00:08:19.231 16535.237 - 16636.062: 99.4702% ( 4) 00:08:19.231 16837.711 - 16938.535: 99.5134% ( 8) 00:08:19.231 16938.535 - 17039.360: 99.5891% ( 14) 00:08:19.231 17039.360 - 17140.185: 99.6540% ( 12) 00:08:19.231 19660.800 - 19761.625: 99.6594% ( 1) 00:08:19.231 19761.625 - 19862.449: 99.6864% ( 5) 00:08:19.231 19862.449 - 19963.274: 99.7459% ( 11) 00:08:19.231 19963.274 - 20064.098: 99.8054% ( 11) 00:08:19.231 20064.098 - 20164.923: 99.8162% ( 2) 00:08:19.231 20164.923 - 20265.748: 99.8648% ( 9) 00:08:19.231 20265.748 - 20366.572: 99.9513% ( 16) 00:08:19.231 20366.572 - 20467.397: 100.0000% ( 9) 00:08:19.231 00:08:19.231 Latency histogram for PCIE (0000:00:12.0) NSID 1 from core 0: 00:08:19.231 ============================================================================== 00:08:19.231 Range in us Cumulative IO count 00:08:19.231 4285.046 - 4310.252: 0.0270% ( 5) 00:08:19.231 4310.252 - 4335.458: 0.0811% ( 10) 00:08:19.231 4335.458 - 4360.665: 0.1352% ( 10) 00:08:19.231 4360.665 - 4385.871: 0.2000% ( 12) 00:08:19.231 4385.871 - 4411.077: 0.2325% ( 6) 00:08:19.231 4411.077 - 4436.283: 0.2487% ( 3) 00:08:19.231 4436.283 - 4461.489: 0.2595% ( 2) 00:08:19.231 4461.489 - 4486.695: 0.2703% ( 2) 00:08:19.231 4486.695 - 4511.902: 0.2865% ( 3) 00:08:19.231 4511.902 - 4537.108: 0.2920% ( 1) 00:08:19.231 4537.108 - 4562.314: 0.3028% ( 2) 00:08:19.231 4562.314 - 4587.520: 0.3190% ( 3) 
00:08:19.231 4587.520 - 4612.726: 0.3298% ( 2) 00:08:19.231 4612.726 - 4637.932: 0.3460% ( 3) 00:08:19.231 5444.529 - 5469.735: 0.3514% ( 1) 00:08:19.231 5520.148 - 5545.354: 0.3568% ( 1) 00:08:19.231 5620.972 - 5646.178: 0.3622% ( 1) 00:08:19.231 5646.178 - 5671.385: 0.3785% ( 3) 00:08:19.231 5671.385 - 5696.591: 0.3893% ( 2) 00:08:19.231 5696.591 - 5721.797: 0.4055% ( 3) 00:08:19.231 5721.797 - 5747.003: 0.4379% ( 6) 00:08:19.231 5747.003 - 5772.209: 0.5190% ( 15) 00:08:19.231 5772.209 - 5797.415: 0.7948% ( 51) 00:08:19.231 5797.415 - 5822.622: 0.9516% ( 29) 00:08:19.231 5822.622 - 5847.828: 1.4165% ( 86) 00:08:19.231 5847.828 - 5873.034: 1.6814% ( 49) 00:08:19.231 5873.034 - 5898.240: 2.0383% ( 66) 00:08:19.231 5898.240 - 5923.446: 2.2113% ( 32) 00:08:19.231 5923.446 - 5948.652: 2.4005% ( 35) 00:08:19.231 5948.652 - 5973.858: 2.6330% ( 43) 00:08:19.231 5973.858 - 5999.065: 3.0331% ( 74) 00:08:19.231 5999.065 - 6024.271: 3.2980% ( 49) 00:08:19.231 6024.271 - 6049.477: 3.5467% ( 46) 00:08:19.231 6049.477 - 6074.683: 3.9576% ( 76) 00:08:19.231 6074.683 - 6099.889: 4.3090% ( 65) 00:08:19.231 6099.889 - 6125.095: 4.9686% ( 122) 00:08:19.231 6125.095 - 6150.302: 5.4823% ( 95) 00:08:19.231 6150.302 - 6175.508: 6.0283% ( 101) 00:08:19.231 6175.508 - 6200.714: 6.8988% ( 161) 00:08:19.231 6200.714 - 6225.920: 8.0017% ( 204) 00:08:19.231 6225.920 - 6251.126: 9.0939% ( 202) 00:08:19.231 6251.126 - 6276.332: 10.3914% ( 240) 00:08:19.231 6276.332 - 6301.538: 11.7485% ( 251) 00:08:19.231 6301.538 - 6326.745: 13.3975% ( 305) 00:08:19.231 6326.745 - 6351.951: 15.7602% ( 437) 00:08:19.231 6351.951 - 6377.157: 18.4580% ( 499) 00:08:19.231 6377.157 - 6402.363: 20.9234% ( 456) 00:08:19.231 6402.363 - 6427.569: 23.7457% ( 522) 00:08:19.231 6427.569 - 6452.775: 26.7031% ( 547) 00:08:19.231 6452.775 - 6503.188: 33.4234% ( 1243) 00:08:19.231 6503.188 - 6553.600: 41.3116% ( 1459) 00:08:19.231 6553.600 - 6604.012: 49.5458% ( 1523) 00:08:19.231 6604.012 - 6654.425: 56.8663% ( 1354) 00:08:19.231 6654.425 - 6704.837: 62.6081% ( 1062) 00:08:19.231 6704.837 - 6755.249: 66.5874% ( 736) 00:08:19.231 6755.249 - 6805.662: 69.8908% ( 611) 00:08:19.231 6805.662 - 6856.074: 72.4589% ( 475) 00:08:19.231 6856.074 - 6906.486: 74.5350% ( 384) 00:08:19.231 6906.486 - 6956.898: 76.4868% ( 361) 00:08:19.231 6956.898 - 7007.311: 78.1088% ( 300) 00:08:19.231 7007.311 - 7057.723: 79.9470% ( 340) 00:08:19.231 7057.723 - 7108.135: 81.4609% ( 280) 00:08:19.231 7108.135 - 7158.548: 82.4340% ( 180) 00:08:19.231 7158.548 - 7208.960: 83.1964% ( 141) 00:08:19.231 7208.960 - 7259.372: 83.9641% ( 142) 00:08:19.231 7259.372 - 7309.785: 84.3263% ( 67) 00:08:19.231 7309.785 - 7360.197: 84.8075% ( 89) 00:08:19.231 7360.197 - 7410.609: 85.3482% ( 100) 00:08:19.231 7410.609 - 7461.022: 85.8726% ( 97) 00:08:19.231 7461.022 - 7511.434: 86.2295% ( 66) 00:08:19.231 7511.434 - 7561.846: 86.5809% ( 65) 00:08:19.231 7561.846 - 7612.258: 87.2351% ( 121) 00:08:19.231 7612.258 - 7662.671: 87.7757% ( 100) 00:08:19.231 7662.671 - 7713.083: 88.3651% ( 109) 00:08:19.231 7713.083 - 7763.495: 88.9652% ( 111) 00:08:19.231 7763.495 - 7813.908: 89.7329% ( 142) 00:08:19.231 7813.908 - 7864.320: 90.2898% ( 103) 00:08:19.231 7864.320 - 7914.732: 90.8521% ( 104) 00:08:19.231 7914.732 - 7965.145: 91.3657% ( 95) 00:08:19.231 7965.145 - 8015.557: 92.0523% ( 127) 00:08:19.231 8015.557 - 8065.969: 92.6362% ( 108) 00:08:19.231 8065.969 - 8116.382: 93.0363% ( 74) 00:08:19.231 8116.382 - 8166.794: 93.2526% ( 40) 00:08:19.231 8166.794 - 8217.206: 93.5878% ( 62) 00:08:19.231 
8217.206 - 8267.618: 93.7284% ( 26) 00:08:19.231 8267.618 - 8318.031: 93.9068% ( 33) 00:08:19.231 8318.031 - 8368.443: 94.0852% ( 33) 00:08:19.231 8368.443 - 8418.855: 94.1879% ( 19) 00:08:19.231 8418.855 - 8469.268: 94.2798% ( 17) 00:08:19.231 8469.268 - 8519.680: 94.4204% ( 26) 00:08:19.231 8519.680 - 8570.092: 94.5123% ( 17) 00:08:19.231 8570.092 - 8620.505: 94.6421% ( 24) 00:08:19.231 8620.505 - 8670.917: 94.7772% ( 25) 00:08:19.231 8670.917 - 8721.329: 94.8367% ( 11) 00:08:19.231 8721.329 - 8771.742: 94.9016% ( 12) 00:08:19.231 8771.742 - 8822.154: 95.0260% ( 23) 00:08:19.231 8822.154 - 8872.566: 95.1503% ( 23) 00:08:19.231 8872.566 - 8922.978: 95.2584% ( 20) 00:08:19.231 8922.978 - 8973.391: 95.4639% ( 38) 00:08:19.231 8973.391 - 9023.803: 95.7667% ( 56) 00:08:19.231 9023.803 - 9074.215: 95.8856% ( 22) 00:08:19.231 9074.215 - 9124.628: 96.0478% ( 30) 00:08:19.231 9124.628 - 9175.040: 96.3181% ( 50) 00:08:19.231 9175.040 - 9225.452: 96.5776% ( 48) 00:08:19.231 9225.452 - 9275.865: 96.7020% ( 23) 00:08:19.231 9275.865 - 9326.277: 96.7615% ( 11) 00:08:19.231 9326.277 - 9376.689: 97.0156% ( 47) 00:08:19.231 9376.689 - 9427.102: 97.0859% ( 13) 00:08:19.231 9427.102 - 9477.514: 97.2318% ( 27) 00:08:19.231 9477.514 - 9527.926: 97.3724% ( 26) 00:08:19.231 9527.926 - 9578.338: 97.4697% ( 18) 00:08:19.231 9578.338 - 9628.751: 97.5292% ( 11) 00:08:19.231 9628.751 - 9679.163: 97.5616% ( 6) 00:08:19.231 9679.163 - 9729.575: 97.6049% ( 8) 00:08:19.231 9729.575 - 9779.988: 97.6698% ( 12) 00:08:19.231 9779.988 - 9830.400: 97.7346% ( 12) 00:08:19.231 9830.400 - 9880.812: 97.8157% ( 15) 00:08:19.231 9880.812 - 9931.225: 97.9077% ( 17) 00:08:19.231 9931.225 - 9981.637: 98.0104% ( 19) 00:08:19.231 9981.637 - 10032.049: 98.0536% ( 8) 00:08:19.231 10032.049 - 10082.462: 98.0861% ( 6) 00:08:19.231 10082.462 - 10132.874: 98.1185% ( 6) 00:08:19.231 10132.874 - 10183.286: 98.1510% ( 6) 00:08:19.231 10183.286 - 10233.698: 98.1834% ( 6) 00:08:19.231 10233.698 - 10284.111: 98.2158% ( 6) 00:08:19.231 10284.111 - 10334.523: 98.2429% ( 5) 00:08:19.232 10334.523 - 10384.935: 98.2591% ( 3) 00:08:19.232 10384.935 - 10435.348: 98.2699% ( 2) 00:08:19.232 11494.006 - 11544.418: 98.2807% ( 2) 00:08:19.232 11544.418 - 11594.831: 98.2969% ( 3) 00:08:19.232 11594.831 - 11645.243: 98.3240% ( 5) 00:08:19.232 11645.243 - 11695.655: 98.3402% ( 3) 00:08:19.232 11695.655 - 11746.068: 98.3564% ( 3) 00:08:19.232 11746.068 - 11796.480: 98.3726% ( 3) 00:08:19.232 11796.480 - 11846.892: 98.3942% ( 4) 00:08:19.232 11846.892 - 11897.305: 98.4105% ( 3) 00:08:19.232 11897.305 - 11947.717: 98.4321% ( 4) 00:08:19.232 11947.717 - 11998.129: 98.4862% ( 10) 00:08:19.232 11998.129 - 12048.542: 98.6375% ( 28) 00:08:19.232 12048.542 - 12098.954: 98.7132% ( 14) 00:08:19.232 12098.954 - 12149.366: 98.7457% ( 6) 00:08:19.232 12149.366 - 12199.778: 98.7673% ( 4) 00:08:19.232 12199.778 - 12250.191: 98.7943% ( 5) 00:08:19.232 12250.191 - 12300.603: 98.8268% ( 6) 00:08:19.232 12300.603 - 12351.015: 98.8538% ( 5) 00:08:19.232 12351.015 - 12401.428: 98.8808% ( 5) 00:08:19.232 12401.428 - 12451.840: 98.9187% ( 7) 00:08:19.232 12451.840 - 12502.252: 98.9565% ( 7) 00:08:19.232 12502.252 - 12552.665: 98.9890% ( 6) 00:08:19.232 12552.665 - 12603.077: 99.0160% ( 5) 00:08:19.232 12603.077 - 12653.489: 99.0322% ( 3) 00:08:19.232 12653.489 - 12703.902: 99.0430% ( 2) 00:08:19.232 12703.902 - 12754.314: 99.0538% ( 2) 00:08:19.232 12754.314 - 12804.726: 99.0701% ( 3) 00:08:19.232 12804.726 - 12855.138: 99.0863% ( 3) 00:08:19.232 12855.138 - 12905.551: 99.0971% ( 2) 
00:08:19.232 12905.551 - 13006.375: 99.1241% ( 5) 00:08:19.232 13006.375 - 13107.200: 99.2431% ( 22) 00:08:19.232 13107.200 - 13208.025: 99.2863% ( 8) 00:08:19.232 13208.025 - 13308.849: 99.3080% ( 4) 00:08:19.232 16131.938 - 16232.763: 99.3620% ( 10) 00:08:19.232 16232.763 - 16333.588: 99.6269% ( 49) 00:08:19.232 16333.588 - 16434.412: 99.6540% ( 5) 00:08:19.232 18955.028 - 19055.852: 99.6594% ( 1) 00:08:19.232 19055.852 - 19156.677: 99.6864% ( 5) 00:08:19.232 19156.677 - 19257.502: 99.7135% ( 5) 00:08:19.232 19257.502 - 19358.326: 99.7459% ( 6) 00:08:19.232 19358.326 - 19459.151: 99.8432% ( 18) 00:08:19.232 19459.151 - 19559.975: 99.8973% ( 10) 00:08:19.232 19559.975 - 19660.800: 99.9351% ( 7) 00:08:19.232 19660.800 - 19761.625: 99.9622% ( 5) 00:08:19.232 19761.625 - 19862.449: 99.9892% ( 5) 00:08:19.232 19862.449 - 19963.274: 100.0000% ( 2) 00:08:19.232 00:08:19.232 Latency histogram for PCIE (0000:00:12.0) NSID 2 from core 0: 00:08:19.232 ============================================================================== 00:08:19.232 Range in us Cumulative IO count 00:08:19.232 3453.243 - 3478.449: 0.0054% ( 1) 00:08:19.232 3478.449 - 3503.655: 0.0162% ( 2) 00:08:19.232 3503.655 - 3528.862: 0.0378% ( 4) 00:08:19.232 3528.862 - 3554.068: 0.0919% ( 10) 00:08:19.232 3554.068 - 3579.274: 0.1622% ( 13) 00:08:19.232 3579.274 - 3604.480: 0.2433% ( 15) 00:08:19.232 3604.480 - 3629.686: 0.2649% ( 4) 00:08:19.232 3629.686 - 3654.892: 0.2757% ( 2) 00:08:19.232 3654.892 - 3680.098: 0.2865% ( 2) 00:08:19.232 3680.098 - 3705.305: 0.2974% ( 2) 00:08:19.232 3705.305 - 3730.511: 0.3136% ( 3) 00:08:19.232 3730.511 - 3755.717: 0.3244% ( 2) 00:08:19.232 3755.717 - 3780.923: 0.3406% ( 3) 00:08:19.232 3780.923 - 3806.129: 0.3460% ( 1) 00:08:19.232 5469.735 - 5494.942: 0.3514% ( 1) 00:08:19.232 5494.942 - 5520.148: 0.3568% ( 1) 00:08:19.232 5520.148 - 5545.354: 0.3731% ( 3) 00:08:19.232 5545.354 - 5570.560: 0.3785% ( 1) 00:08:19.232 5570.560 - 5595.766: 0.4001% ( 4) 00:08:19.232 5595.766 - 5620.972: 0.4109% ( 2) 00:08:19.232 5620.972 - 5646.178: 0.4487% ( 7) 00:08:19.232 5646.178 - 5671.385: 0.4812% ( 6) 00:08:19.232 5671.385 - 5696.591: 0.5515% ( 13) 00:08:19.232 5696.591 - 5721.797: 0.6272% ( 14) 00:08:19.232 5721.797 - 5747.003: 0.7137% ( 16) 00:08:19.232 5747.003 - 5772.209: 0.8542% ( 26) 00:08:19.232 5772.209 - 5797.415: 0.9732% ( 22) 00:08:19.232 5797.415 - 5822.622: 1.1894% ( 40) 00:08:19.232 5822.622 - 5847.828: 1.4165% ( 42) 00:08:19.232 5847.828 - 5873.034: 1.6652% ( 46) 00:08:19.232 5873.034 - 5898.240: 1.9518% ( 53) 00:08:19.232 5898.240 - 5923.446: 2.1518% ( 37) 00:08:19.232 5923.446 - 5948.652: 2.4438% ( 54) 00:08:19.232 5948.652 - 5973.858: 2.7195% ( 51) 00:08:19.232 5973.858 - 5999.065: 2.9682% ( 46) 00:08:19.232 5999.065 - 6024.271: 3.3196% ( 65) 00:08:19.232 6024.271 - 6049.477: 3.6170% ( 55) 00:08:19.232 6049.477 - 6074.683: 3.9035% ( 53) 00:08:19.232 6074.683 - 6099.889: 4.3415% ( 81) 00:08:19.232 6099.889 - 6125.095: 4.8227% ( 89) 00:08:19.232 6125.095 - 6150.302: 5.3741% ( 102) 00:08:19.232 6150.302 - 6175.508: 6.0013% ( 116) 00:08:19.232 6175.508 - 6200.714: 6.7204% ( 133) 00:08:19.232 6200.714 - 6225.920: 7.6611% ( 174) 00:08:19.232 6225.920 - 6251.126: 8.6019% ( 174) 00:08:19.232 6251.126 - 6276.332: 10.0238% ( 263) 00:08:19.232 6276.332 - 6301.538: 11.5971% ( 291) 00:08:19.232 6301.538 - 6326.745: 13.8949% ( 425) 00:08:19.232 6326.745 - 6351.951: 16.1765% ( 422) 00:08:19.232 6351.951 - 6377.157: 18.4364% ( 418) 00:08:19.232 6377.157 - 6402.363: 20.8478% ( 446) 00:08:19.232 6402.363 - 
6427.569: 23.5078% ( 492) 00:08:19.232 6427.569 - 6452.775: 26.8112% ( 611) 00:08:19.232 6452.775 - 6503.188: 33.6830% ( 1271) 00:08:19.232 6503.188 - 6553.600: 41.4846% ( 1443) 00:08:19.232 6553.600 - 6604.012: 49.7297% ( 1525) 00:08:19.232 6604.012 - 6654.425: 56.0500% ( 1169) 00:08:19.232 6654.425 - 6704.837: 61.3538% ( 981) 00:08:19.232 6704.837 - 6755.249: 66.3549% ( 925) 00:08:19.232 6755.249 - 6805.662: 69.2636% ( 538) 00:08:19.232 6805.662 - 6856.074: 72.9779% ( 687) 00:08:19.232 6856.074 - 6906.486: 75.7840% ( 519) 00:08:19.232 6906.486 - 6956.898: 77.4762% ( 313) 00:08:19.232 6956.898 - 7007.311: 78.9576% ( 274) 00:08:19.232 7007.311 - 7057.723: 80.3363% ( 255) 00:08:19.232 7057.723 - 7108.135: 81.3095% ( 180) 00:08:19.232 7108.135 - 7158.548: 82.0556% ( 138) 00:08:19.232 7158.548 - 7208.960: 83.2396% ( 219) 00:08:19.232 7208.960 - 7259.372: 83.9803% ( 137) 00:08:19.232 7259.372 - 7309.785: 84.4453% ( 86) 00:08:19.232 7309.785 - 7360.197: 84.8616% ( 77) 00:08:19.232 7360.197 - 7410.609: 85.3590% ( 92) 00:08:19.232 7410.609 - 7461.022: 85.8077% ( 83) 00:08:19.232 7461.022 - 7511.434: 86.6782% ( 161) 00:08:19.232 7511.434 - 7561.846: 87.0945% ( 77) 00:08:19.232 7561.846 - 7612.258: 87.5324% ( 81) 00:08:19.232 7612.258 - 7662.671: 87.8731% ( 63) 00:08:19.232 7662.671 - 7713.083: 88.4137% ( 100) 00:08:19.232 7713.083 - 7763.495: 88.8516% ( 81) 00:08:19.232 7763.495 - 7813.908: 89.3923% ( 100) 00:08:19.232 7813.908 - 7864.320: 90.0357% ( 119) 00:08:19.232 7864.320 - 7914.732: 90.7602% ( 134) 00:08:19.232 7914.732 - 7965.145: 91.3333% ( 106) 00:08:19.232 7965.145 - 8015.557: 91.9496% ( 114) 00:08:19.232 8015.557 - 8065.969: 92.3605% ( 76) 00:08:19.232 8065.969 - 8116.382: 93.0580% ( 129) 00:08:19.232 8116.382 - 8166.794: 93.3715% ( 58) 00:08:19.232 8166.794 - 8217.206: 93.5716% ( 37) 00:08:19.232 8217.206 - 8267.618: 93.7933% ( 41) 00:08:19.232 8267.618 - 8318.031: 93.9609% ( 31) 00:08:19.232 8318.031 - 8368.443: 94.1501% ( 35) 00:08:19.232 8368.443 - 8418.855: 94.3988% ( 46) 00:08:19.232 8418.855 - 8469.268: 94.5610% ( 30) 00:08:19.232 8469.268 - 8519.680: 94.6962% ( 25) 00:08:19.232 8519.680 - 8570.092: 94.8151% ( 22) 00:08:19.232 8570.092 - 8620.505: 94.9232% ( 20) 00:08:19.232 8620.505 - 8670.917: 95.0043% ( 15) 00:08:19.232 8670.917 - 8721.329: 95.0584% ( 10) 00:08:19.232 8721.329 - 8771.742: 95.1125% ( 10) 00:08:19.232 8771.742 - 8822.154: 95.2530% ( 26) 00:08:19.232 8822.154 - 8872.566: 95.3936% ( 26) 00:08:19.232 8872.566 - 8922.978: 95.5396% ( 27) 00:08:19.232 8922.978 - 8973.391: 95.6423% ( 19) 00:08:19.232 8973.391 - 9023.803: 95.7667% ( 23) 00:08:19.233 9023.803 - 9074.215: 95.8856% ( 22) 00:08:19.233 9074.215 - 9124.628: 96.1127% ( 42) 00:08:19.233 9124.628 - 9175.040: 96.2424% ( 24) 00:08:19.233 9175.040 - 9225.452: 96.3776% ( 25) 00:08:19.233 9225.452 - 9275.865: 96.6101% ( 43) 00:08:19.233 9275.865 - 9326.277: 96.7290% ( 22) 00:08:19.233 9326.277 - 9376.689: 96.7993% ( 13) 00:08:19.233 9376.689 - 9427.102: 96.9020% ( 19) 00:08:19.233 9427.102 - 9477.514: 97.0264% ( 23) 00:08:19.233 9477.514 - 9527.926: 97.2372% ( 39) 00:08:19.233 9527.926 - 9578.338: 97.4697% ( 43) 00:08:19.233 9578.338 - 9628.751: 97.5616% ( 17) 00:08:19.233 9628.751 - 9679.163: 97.6319% ( 13) 00:08:19.233 9679.163 - 9729.575: 97.6752% ( 8) 00:08:19.233 9729.575 - 9779.988: 97.7022% ( 5) 00:08:19.233 9779.988 - 9830.400: 97.7238% ( 4) 00:08:19.233 9830.400 - 9880.812: 97.7401% ( 3) 00:08:19.233 9880.812 - 9931.225: 97.7617% ( 4) 00:08:19.233 9931.225 - 9981.637: 97.7779% ( 3) 00:08:19.233 9981.637 - 
10032.049: 97.7995% ( 4) 00:08:19.233 10032.049 - 10082.462: 97.8157% ( 3) 00:08:19.233 10082.462 - 10132.874: 97.8374% ( 4) 00:08:19.233 10132.874 - 10183.286: 97.8590% ( 4) 00:08:19.233 10183.286 - 10233.698: 97.8752% ( 3) 00:08:19.233 10233.698 - 10284.111: 97.8914% ( 3) 00:08:19.233 10284.111 - 10334.523: 97.9131% ( 4) 00:08:19.233 10334.523 - 10384.935: 97.9455% ( 6) 00:08:19.233 10384.935 - 10435.348: 97.9617% ( 3) 00:08:19.233 10435.348 - 10485.760: 97.9833% ( 4) 00:08:19.233 10485.760 - 10536.172: 98.0050% ( 4) 00:08:19.233 10536.172 - 10586.585: 98.0320% ( 5) 00:08:19.233 10586.585 - 10636.997: 98.0699% ( 7) 00:08:19.233 10636.997 - 10687.409: 98.1618% ( 17) 00:08:19.233 10687.409 - 10737.822: 98.1780% ( 3) 00:08:19.233 10737.822 - 10788.234: 98.1942% ( 3) 00:08:19.233 10788.234 - 10838.646: 98.2104% ( 3) 00:08:19.233 10838.646 - 10889.058: 98.2429% ( 6) 00:08:19.233 10889.058 - 10939.471: 98.2753% ( 6) 00:08:19.233 10939.471 - 10989.883: 98.3077% ( 6) 00:08:19.233 10989.883 - 11040.295: 98.3348% ( 5) 00:08:19.233 11040.295 - 11090.708: 98.3510% ( 3) 00:08:19.233 11090.708 - 11141.120: 98.3672% ( 3) 00:08:19.233 11141.120 - 11191.532: 98.3834% ( 3) 00:08:19.233 11191.532 - 11241.945: 98.3997% ( 3) 00:08:19.233 11241.945 - 11292.357: 98.4159% ( 3) 00:08:19.233 11292.357 - 11342.769: 98.4321% ( 3) 00:08:19.233 11342.769 - 11393.182: 98.4483% ( 3) 00:08:19.233 11393.182 - 11443.594: 98.4645% ( 3) 00:08:19.233 11443.594 - 11494.006: 98.4808% ( 3) 00:08:19.233 11494.006 - 11544.418: 98.4970% ( 3) 00:08:19.233 11544.418 - 11594.831: 98.5186% ( 4) 00:08:19.233 11594.831 - 11645.243: 98.5348% ( 3) 00:08:19.233 11645.243 - 11695.655: 98.5510% ( 3) 00:08:19.233 11695.655 - 11746.068: 98.5727% ( 4) 00:08:19.233 11746.068 - 11796.480: 98.5889% ( 3) 00:08:19.233 11796.480 - 11846.892: 98.6321% ( 8) 00:08:19.233 11846.892 - 11897.305: 98.6646% ( 6) 00:08:19.233 11897.305 - 11947.717: 98.6970% ( 6) 00:08:19.233 11947.717 - 11998.129: 98.7186% ( 4) 00:08:19.233 11998.129 - 12048.542: 98.7457% ( 5) 00:08:19.233 12048.542 - 12098.954: 98.7997% ( 10) 00:08:19.233 12098.954 - 12149.366: 98.8376% ( 7) 00:08:19.233 12149.366 - 12199.778: 98.8646% ( 5) 00:08:19.233 12199.778 - 12250.191: 98.8754% ( 2) 00:08:19.233 12250.191 - 12300.603: 98.8808% ( 1) 00:08:19.233 12300.603 - 12351.015: 98.8917% ( 2) 00:08:19.233 12351.015 - 12401.428: 98.9025% ( 2) 00:08:19.233 12401.428 - 12451.840: 98.9079% ( 1) 00:08:19.233 12451.840 - 12502.252: 98.9241% ( 3) 00:08:19.233 12502.252 - 12552.665: 98.9349% ( 2) 00:08:19.233 12552.665 - 12603.077: 98.9511% ( 3) 00:08:19.233 12603.077 - 12653.489: 98.9673% ( 3) 00:08:19.233 12653.489 - 12703.902: 98.9890% ( 4) 00:08:19.233 12703.902 - 12754.314: 99.0106% ( 4) 00:08:19.233 12754.314 - 12804.726: 99.0214% ( 2) 00:08:19.233 12804.726 - 12855.138: 99.0376% ( 3) 00:08:19.233 12855.138 - 12905.551: 99.0538% ( 3) 00:08:19.233 12905.551 - 13006.375: 99.0755% ( 4) 00:08:19.233 13006.375 - 13107.200: 99.0971% ( 4) 00:08:19.233 13107.200 - 13208.025: 99.1404% ( 8) 00:08:19.233 13208.025 - 13308.849: 99.2106% ( 13) 00:08:19.233 13308.849 - 13409.674: 99.2593% ( 9) 00:08:19.233 13409.674 - 13510.498: 99.3080% ( 9) 00:08:19.233 15526.991 - 15627.815: 99.3134% ( 1) 00:08:19.233 15728.640 - 15829.465: 99.3404% ( 5) 00:08:19.233 15829.465 - 15930.289: 99.3945% ( 10) 00:08:19.233 15930.289 - 16031.114: 99.4431% ( 9) 00:08:19.233 16031.114 - 16131.938: 99.4864% ( 8) 00:08:19.233 16131.938 - 16232.763: 99.5188% ( 6) 00:08:19.233 16232.763 - 16333.588: 99.5513% ( 6) 00:08:19.233 16333.588 - 
16434.412: 99.5837% ( 6) 00:08:19.233 16434.412 - 16535.237: 99.6161% ( 6) 00:08:19.233 16535.237 - 16636.062: 99.6486% ( 6) 00:08:19.233 16636.062 - 16736.886: 99.6540% ( 1) 00:08:19.233 18955.028 - 19055.852: 99.7405% ( 16) 00:08:19.233 19055.852 - 19156.677: 99.9784% ( 44) 00:08:19.233 19156.677 - 19257.502: 99.9838% ( 1) 00:08:19.233 19559.975 - 19660.800: 100.0000% ( 3) 00:08:19.233 00:08:19.233 Latency histogram for PCIE (0000:00:12.0) NSID 3 from core 0: 00:08:19.233 ============================================================================== 00:08:19.233 Range in us Cumulative IO count 00:08:19.233 3024.738 - 3037.342: 0.0108% ( 2) 00:08:19.233 3037.342 - 3049.945: 0.0433% ( 6) 00:08:19.233 3049.945 - 3062.548: 0.0703% ( 5) 00:08:19.233 3062.548 - 3075.151: 0.1081% ( 7) 00:08:19.233 3075.151 - 3087.754: 0.1352% ( 5) 00:08:19.233 3087.754 - 3100.357: 0.1514% ( 3) 00:08:19.233 3100.357 - 3112.960: 0.1838% ( 6) 00:08:19.233 3112.960 - 3125.563: 0.2109% ( 5) 00:08:19.233 3125.563 - 3138.166: 0.2163% ( 1) 00:08:19.233 3138.166 - 3150.769: 0.2217% ( 1) 00:08:19.233 3150.769 - 3163.372: 0.2271% ( 1) 00:08:19.233 3163.372 - 3175.975: 0.2325% ( 1) 00:08:19.233 3175.975 - 3188.578: 0.2379% ( 1) 00:08:19.233 3188.578 - 3201.182: 0.2433% ( 1) 00:08:19.233 3201.182 - 3213.785: 0.2487% ( 1) 00:08:19.233 3213.785 - 3226.388: 0.2541% ( 1) 00:08:19.233 3226.388 - 3251.594: 0.2649% ( 2) 00:08:19.233 3251.594 - 3276.800: 0.2757% ( 2) 00:08:19.233 3276.800 - 3302.006: 0.2865% ( 2) 00:08:19.233 3302.006 - 3327.212: 0.2974% ( 2) 00:08:19.233 3327.212 - 3352.418: 0.3082% ( 2) 00:08:19.233 3352.418 - 3377.625: 0.3244% ( 3) 00:08:19.233 3377.625 - 3402.831: 0.3298% ( 1) 00:08:19.233 3402.831 - 3428.037: 0.3460% ( 3) 00:08:19.233 5368.911 - 5394.117: 0.3514% ( 1) 00:08:19.233 5394.117 - 5419.323: 0.3568% ( 1) 00:08:19.233 5419.323 - 5444.529: 0.3731% ( 3) 00:08:19.233 5444.529 - 5469.735: 0.3893% ( 3) 00:08:19.233 5469.735 - 5494.942: 0.4055% ( 3) 00:08:19.233 5494.942 - 5520.148: 0.4325% ( 5) 00:08:19.233 5520.148 - 5545.354: 0.4920% ( 11) 00:08:19.233 5545.354 - 5570.560: 0.5461% ( 10) 00:08:19.233 5570.560 - 5595.766: 0.5947% ( 9) 00:08:19.233 5595.766 - 5620.972: 0.6434% ( 9) 00:08:19.233 5620.972 - 5646.178: 0.6866% ( 8) 00:08:19.233 5646.178 - 5671.385: 0.7407% ( 10) 00:08:19.233 5671.385 - 5696.591: 0.8002% ( 11) 00:08:19.233 5696.591 - 5721.797: 0.8596% ( 11) 00:08:19.233 5721.797 - 5747.003: 0.9516% ( 17) 00:08:19.233 5747.003 - 5772.209: 1.0327% ( 15) 00:08:19.233 5772.209 - 5797.415: 1.1462% ( 21) 00:08:19.233 5797.415 - 5822.622: 1.3030% ( 29) 00:08:19.233 5822.622 - 5847.828: 1.5895% ( 53) 00:08:19.233 5847.828 - 5873.034: 1.8220% ( 43) 00:08:19.233 5873.034 - 5898.240: 1.9626% ( 26) 00:08:19.233 5898.240 - 5923.446: 2.2221% ( 48) 00:08:19.233 5923.446 - 5948.652: 2.4005% ( 33) 00:08:19.233 5948.652 - 5973.858: 2.7087% ( 57) 00:08:19.233 5973.858 - 5999.065: 2.9574% ( 46) 00:08:19.233 5999.065 - 6024.271: 3.3196% ( 67) 00:08:19.233 6024.271 - 6049.477: 3.6494% ( 61) 00:08:19.233 6049.477 - 6074.683: 3.9252% ( 51) 00:08:19.233 6074.683 - 6099.889: 4.2171% ( 54) 00:08:19.233 6099.889 - 6125.095: 4.7578% ( 100) 00:08:19.233 6125.095 - 6150.302: 5.2930% ( 99) 00:08:19.233 6150.302 - 6175.508: 5.8229% ( 98) 00:08:19.233 6175.508 - 6200.714: 6.7420% ( 170) 00:08:19.233 6200.714 - 6225.920: 7.8017% ( 196) 00:08:19.233 6225.920 - 6251.126: 8.9263% ( 208) 00:08:19.233 6251.126 - 6276.332: 10.0941% ( 216) 00:08:19.233 6276.332 - 6301.538: 11.7647% ( 309) 00:08:19.233 6301.538 - 6326.745: 13.4732% ( 
316) 00:08:19.233 6326.745 - 6351.951: 16.1440% ( 494) 00:08:19.233 6351.951 - 6377.157: 18.3878% ( 415) 00:08:19.233 6377.157 - 6402.363: 20.6910% ( 426) 00:08:19.233 6402.363 - 6427.569: 23.6646% ( 550) 00:08:19.233 6427.569 - 6452.775: 27.0491% ( 626) 00:08:19.233 6452.775 - 6503.188: 34.4345% ( 1366) 00:08:19.233 6503.188 - 6553.600: 41.4738% ( 1302) 00:08:19.233 6553.600 - 6604.012: 49.1620% ( 1422) 00:08:19.233 6604.012 - 6654.425: 56.1797% ( 1298) 00:08:19.233 6654.425 - 6704.837: 61.5160% ( 987) 00:08:19.233 6704.837 - 6755.249: 65.9656% ( 823) 00:08:19.233 6755.249 - 6805.662: 69.4961% ( 653) 00:08:19.233 6805.662 - 6856.074: 72.7617% ( 604) 00:08:19.233 6856.074 - 6906.486: 75.8056% ( 563) 00:08:19.233 6906.486 - 6956.898: 77.3951% ( 294) 00:08:19.233 6956.898 - 7007.311: 78.7413% ( 249) 00:08:19.233 7007.311 - 7057.723: 80.3687% ( 301) 00:08:19.233 7057.723 - 7108.135: 81.2987% ( 172) 00:08:19.233 7108.135 - 7158.548: 82.5205% ( 226) 00:08:19.233 7158.548 - 7208.960: 83.2558% ( 136) 00:08:19.233 7208.960 - 7259.372: 83.8235% ( 105) 00:08:19.233 7259.372 - 7309.785: 84.5534% ( 135) 00:08:19.233 7309.785 - 7360.197: 85.0616% ( 94) 00:08:19.234 7360.197 - 7410.609: 85.6185% ( 103) 00:08:19.234 7410.609 - 7461.022: 86.1808% ( 104) 00:08:19.234 7461.022 - 7511.434: 86.8242% ( 119) 00:08:19.234 7511.434 - 7561.846: 87.3702% ( 101) 00:08:19.234 7561.846 - 7612.258: 87.7487% ( 70) 00:08:19.234 7612.258 - 7662.671: 88.1488% ( 74) 00:08:19.234 7662.671 - 7713.083: 88.7597% ( 113) 00:08:19.234 7713.083 - 7763.495: 89.3653% ( 112) 00:08:19.234 7763.495 - 7813.908: 89.8897% ( 97) 00:08:19.234 7813.908 - 7864.320: 90.4790% ( 109) 00:08:19.234 7864.320 - 7914.732: 91.0197% ( 100) 00:08:19.234 7914.732 - 7965.145: 91.4144% ( 73) 00:08:19.234 7965.145 - 8015.557: 92.2686% ( 158) 00:08:19.234 8015.557 - 8065.969: 92.5822% ( 58) 00:08:19.234 8065.969 - 8116.382: 92.9390% ( 66) 00:08:19.234 8116.382 - 8166.794: 93.2364% ( 55) 00:08:19.234 8166.794 - 8217.206: 93.4635% ( 42) 00:08:19.234 8217.206 - 8267.618: 93.5986% ( 25) 00:08:19.234 8267.618 - 8318.031: 93.7446% ( 27) 00:08:19.234 8318.031 - 8368.443: 93.8960% ( 28) 00:08:19.234 8368.443 - 8418.855: 94.0474% ( 28) 00:08:19.234 8418.855 - 8469.268: 94.2150% ( 31) 00:08:19.234 8469.268 - 8519.680: 94.3772% ( 30) 00:08:19.234 8519.680 - 8570.092: 94.5502% ( 32) 00:08:19.234 8570.092 - 8620.505: 94.7989% ( 46) 00:08:19.234 8620.505 - 8670.917: 95.0476% ( 46) 00:08:19.234 8670.917 - 8721.329: 95.2260% ( 33) 00:08:19.234 8721.329 - 8771.742: 95.3341% ( 20) 00:08:19.234 8771.742 - 8822.154: 95.4585% ( 23) 00:08:19.234 8822.154 - 8872.566: 95.6207% ( 30) 00:08:19.234 8872.566 - 8922.978: 95.8261% ( 38) 00:08:19.234 8922.978 - 8973.391: 95.9721% ( 27) 00:08:19.234 8973.391 - 9023.803: 96.1181% ( 27) 00:08:19.234 9023.803 - 9074.215: 96.1884% ( 13) 00:08:19.234 9074.215 - 9124.628: 96.2532% ( 12) 00:08:19.234 9124.628 - 9175.040: 96.3019% ( 9) 00:08:19.234 9175.040 - 9225.452: 96.3452% ( 8) 00:08:19.234 9225.452 - 9275.865: 96.3992% ( 10) 00:08:19.234 9275.865 - 9326.277: 96.4857% ( 16) 00:08:19.234 9326.277 - 9376.689: 96.5668% ( 15) 00:08:19.234 9376.689 - 9427.102: 96.6858% ( 22) 00:08:19.234 9427.102 - 9477.514: 96.8696% ( 34) 00:08:19.234 9477.514 - 9527.926: 96.9345% ( 12) 00:08:19.234 9527.926 - 9578.338: 96.9994% ( 12) 00:08:19.234 9578.338 - 9628.751: 97.1021% ( 19) 00:08:19.234 9628.751 - 9679.163: 97.2318% ( 24) 00:08:19.234 9679.163 - 9729.575: 97.3616% ( 24) 00:08:19.234 9729.575 - 9779.988: 97.4103% ( 9) 00:08:19.234 9779.988 - 9830.400: 
97.4697% ( 11) 00:08:19.234 9830.400 - 9880.812: 97.5184% ( 9) 00:08:19.234 9880.812 - 9931.225: 97.5779% ( 11) 00:08:19.234 9931.225 - 9981.637: 97.6319% ( 10) 00:08:19.234 9981.637 - 10032.049: 97.6860% ( 10) 00:08:19.234 10032.049 - 10082.462: 97.7292% ( 8) 00:08:19.234 10082.462 - 10132.874: 97.7833% ( 10) 00:08:19.234 10132.874 - 10183.286: 97.8157% ( 6) 00:08:19.234 10183.286 - 10233.698: 97.8536% ( 7) 00:08:19.234 10233.698 - 10284.111: 97.8752% ( 4) 00:08:19.234 10284.111 - 10334.523: 97.8968% ( 4) 00:08:19.234 10334.523 - 10384.935: 97.9131% ( 3) 00:08:19.234 10384.935 - 10435.348: 97.9239% ( 2) 00:08:19.234 10485.760 - 10536.172: 97.9293% ( 1) 00:08:19.234 10536.172 - 10586.585: 97.9455% ( 3) 00:08:19.234 10586.585 - 10636.997: 97.9725% ( 5) 00:08:19.234 10636.997 - 10687.409: 98.0212% ( 9) 00:08:19.234 10687.409 - 10737.822: 98.0644% ( 8) 00:08:19.234 10737.822 - 10788.234: 98.1185% ( 10) 00:08:19.234 10788.234 - 10838.646: 98.1618% ( 8) 00:08:19.234 10838.646 - 10889.058: 98.2807% ( 22) 00:08:19.234 10889.058 - 10939.471: 98.3131% ( 6) 00:08:19.234 10939.471 - 10989.883: 98.3510% ( 7) 00:08:19.234 10989.883 - 11040.295: 98.3834% ( 6) 00:08:19.234 11040.295 - 11090.708: 98.4321% ( 9) 00:08:19.234 11090.708 - 11141.120: 98.5240% ( 17) 00:08:19.234 11141.120 - 11191.532: 98.6105% ( 16) 00:08:19.234 11191.532 - 11241.945: 98.6754% ( 12) 00:08:19.234 11241.945 - 11292.357: 98.7078% ( 6) 00:08:19.234 11292.357 - 11342.769: 98.7295% ( 4) 00:08:19.234 11342.769 - 11393.182: 98.7565% ( 5) 00:08:19.234 11393.182 - 11443.594: 98.7889% ( 6) 00:08:19.234 11443.594 - 11494.006: 98.8160% ( 5) 00:08:19.234 11494.006 - 11544.418: 98.8430% ( 5) 00:08:19.234 11544.418 - 11594.831: 98.8538% ( 2) 00:08:19.234 11594.831 - 11645.243: 98.8646% ( 2) 00:08:19.234 11645.243 - 11695.655: 98.8700% ( 1) 00:08:19.234 11695.655 - 11746.068: 98.8754% ( 1) 00:08:19.234 11746.068 - 11796.480: 98.8862% ( 2) 00:08:19.234 11796.480 - 11846.892: 98.8917% ( 1) 00:08:19.234 11846.892 - 11897.305: 98.9025% ( 2) 00:08:19.234 11897.305 - 11947.717: 98.9079% ( 1) 00:08:19.234 11947.717 - 11998.129: 98.9187% ( 2) 00:08:19.234 11998.129 - 12048.542: 98.9295% ( 2) 00:08:19.234 12048.542 - 12098.954: 98.9349% ( 1) 00:08:19.234 12098.954 - 12149.366: 98.9403% ( 1) 00:08:19.234 12149.366 - 12199.778: 98.9511% ( 2) 00:08:19.234 12199.778 - 12250.191: 98.9565% ( 1) 00:08:19.234 12250.191 - 12300.603: 98.9619% ( 1) 00:08:19.234 13308.849 - 13409.674: 98.9782% ( 3) 00:08:19.234 13409.674 - 13510.498: 99.0052% ( 5) 00:08:19.234 13510.498 - 13611.323: 99.0268% ( 4) 00:08:19.234 13611.323 - 13712.148: 99.0484% ( 4) 00:08:19.234 13712.148 - 13812.972: 99.0647% ( 3) 00:08:19.234 13812.972 - 13913.797: 99.0917% ( 5) 00:08:19.234 13913.797 - 14014.622: 99.1404% ( 9) 00:08:19.234 14014.622 - 14115.446: 99.2052% ( 12) 00:08:19.234 14115.446 - 14216.271: 99.2701% ( 12) 00:08:19.234 14216.271 - 14317.095: 99.3080% ( 7) 00:08:19.234 15022.868 - 15123.692: 99.3134% ( 1) 00:08:19.234 15123.692 - 15224.517: 99.3728% ( 11) 00:08:19.234 15224.517 - 15325.342: 99.4269% ( 10) 00:08:19.234 15325.342 - 15426.166: 99.4864% ( 11) 00:08:19.234 15426.166 - 15526.991: 99.5188% ( 6) 00:08:19.234 15526.991 - 15627.815: 99.5458% ( 5) 00:08:19.234 15627.815 - 15728.640: 99.5783% ( 6) 00:08:19.234 15728.640 - 15829.465: 99.6053% ( 5) 00:08:19.234 15829.465 - 15930.289: 99.6324% ( 5) 00:08:19.234 15930.289 - 16031.114: 99.6540% ( 4) 00:08:19.234 18450.905 - 18551.729: 99.6648% ( 2) 00:08:19.234 18551.729 - 18652.554: 99.7783% ( 21) 00:08:19.234 18652.554 - 
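The summaries above give per-namespace latency percentiles, and each histogram breaks the same run down into a cumulative percentage plus a per-bucket IO count. When working from a saved copy of this output, a short awk filter can pull out the headline numbers; this is a sketch only, the file name perf.log is an assumption, and it relies on the one-entry-per-line layout shown above.

```sh
# Hypothetical post-processing: print each namespace's p99 latency from a
# saved perf log. perf.log is an illustrative file name.
awk '
  /Summary latency data for/ { sub(/^[0-9:.]+ +/, ""); ns = $0 }  # remember block header
  $2 == "99.00000%"          { print ns, "->", $4 }               # p99 line in that block
' perf.log
```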
00:08:19.234 
00:08:19.234  06:42:11 nvme.nvme_perf -- nvme/nvme.sh@24 -- # '[' -b /dev/ram0 ']'
00:08:19.234 
00:08:19.234 real	0m2.457s
00:08:19.234 user	0m2.175s
00:08:19.234 sys	0m0.179s
00:08:19.234  06:42:11 nvme.nvme_perf -- common/autotest_common.sh@1130 -- # xtrace_disable
00:08:19.234 ************************************
00:08:19.234  06:42:11 nvme.nvme_perf -- common/autotest_common.sh@10 -- # set +x
00:08:19.234 END TEST nvme_perf
00:08:19.234 ************************************
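Output in this shape comes from SPDK's perf example application. A minimal sketch of such an invocation follows; the path assumes perf is built alongside hello_world under build/examples, the queue depth, IO size, workload, and duration are illustrative values rather than the ones this run used, and the latency-tracking flag should be checked against `perf --help` for the tree in use.

```sh
# Illustrative parameters only: -q = queue depth, -o = IO size in bytes,
# -w = workload pattern, -t = run time in seconds, -L = latency tracking.
/home/vagrant/spdk_repo/spdk/build/examples/perf \
    -q 128 -o 4096 -w randread -t 10 -L
```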
00:08:19.234  06:42:11 nvme -- nvme/nvme.sh@87 -- # run_test nvme_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0
00:08:19.234  06:42:11 nvme -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']'
00:08:19.234  06:42:11 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:08:19.234  06:42:11 nvme -- common/autotest_common.sh@10 -- # set +x
00:08:19.234 ************************************
00:08:19.234 START TEST nvme_hello_world
00:08:19.234 ************************************
00:08:19.234  06:42:12 nvme.nvme_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0
00:08:19.234 Initializing NVMe Controllers
00:08:19.234 Attached to 0000:00:11.0
00:08:19.234   Namespace ID: 1 size: 5GB
00:08:19.234 Attached to 0000:00:13.0
00:08:19.234   Namespace ID: 1 size: 1GB
00:08:19.234 Attached to 0000:00:10.0
00:08:19.234   Namespace ID: 1 size: 6GB
00:08:19.234 Attached to 0000:00:12.0
00:08:19.234   Namespace ID: 1 size: 4GB
00:08:19.234   Namespace ID: 2 size: 4GB
00:08:19.234   Namespace ID: 3 size: 4GB
00:08:19.234 Initialization complete.
00:08:19.234 INFO: using host memory buffer for IO
00:08:19.234 Hello world!
00:08:19.234 INFO: using host memory buffer for IO
00:08:19.234 Hello world!
00:08:19.234 INFO: using host memory buffer for IO
00:08:19.234 Hello world!
00:08:19.234 INFO: using host memory buffer for IO
00:08:19.234 Hello world!
00:08:19.234 INFO: using host memory buffer for IO
00:08:19.234 Hello world!
00:08:19.234 INFO: using host memory buffer for IO
00:08:19.234 Hello world!
00:08:19.234 
00:08:19.234 real	0m0.203s
00:08:19.234 user	0m0.071s
00:08:19.234 sys	0m0.087s
00:08:19.234  06:42:12 nvme.nvme_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable
00:08:19.234 ************************************
00:08:19.234 END TEST nvme_hello_world
00:08:19.234 ************************************
00:08:19.234  06:42:12 nvme.nvme_hello_world -- common/autotest_common.sh@10 -- # set +x
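Each of these sections follows the same wrapper pattern: `run_test` from autotest_common.sh prints the START/END banners and times the command, which is where the `real`/`user`/`sys` lines come from. A simplified sketch of that pattern, not the exact upstream implementation:

```sh
# Simplified model of autotest_common.sh's run_test: banner, time the
# command, banner. The real wrapper also manages xtrace and bookkeeping.
run_test() {
    local test_name=$1; shift
    echo "************************************"
    echo "START TEST $test_name"
    echo "************************************"
    time "$@"
    echo "************************************"
    echo "END TEST $test_name"
    echo "************************************"
}

run_test nvme_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0
```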
00:08:19.496 0000:00:12.0: build_io_request_10 Invalid IO length parameter 00:08:19.496 0000:00:12.0: build_io_request_11 Invalid IO length parameter 00:08:19.496 NVMe Readv/Writev Request test 00:08:19.496 Attached to 0000:00:11.0 00:08:19.496 Attached to 0000:00:13.0 00:08:19.496 Attached to 0000:00:10.0 00:08:19.496 Attached to 0000:00:12.0 00:08:19.496 0000:00:11.0: build_io_request_2 test passed 00:08:19.496 0000:00:11.0: build_io_request_4 test passed 00:08:19.496 0000:00:11.0: build_io_request_5 test passed 00:08:19.496 0000:00:11.0: build_io_request_6 test passed 00:08:19.496 0000:00:11.0: build_io_request_7 test passed 00:08:19.496 0000:00:11.0: build_io_request_10 test passed 00:08:19.496 0000:00:10.0: build_io_request_2 test passed 00:08:19.496 0000:00:10.0: build_io_request_4 test passed 00:08:19.496 0000:00:10.0: build_io_request_5 test passed 00:08:19.496 0000:00:10.0: build_io_request_6 test passed 00:08:19.496 0000:00:10.0: build_io_request_7 test passed 00:08:19.496 0000:00:10.0: build_io_request_10 test passed 00:08:19.496 Cleaning up... 00:08:19.496 00:08:19.496 real 0m0.268s 00:08:19.496 user 0m0.129s 00:08:19.496 sys 0m0.086s 00:08:19.496 06:42:12 nvme.nvme_sgl -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:19.496 06:42:12 nvme.nvme_sgl -- common/autotest_common.sh@10 -- # set +x 00:08:19.496 ************************************ 00:08:19.496 END TEST nvme_sgl 00:08:19.496 ************************************ 00:08:19.496 06:42:12 nvme -- nvme/nvme.sh@89 -- # run_test nvme_e2edp /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp 00:08:19.496 06:42:12 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:19.496 06:42:12 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:19.496 06:42:12 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:19.496 ************************************ 00:08:19.496 START TEST nvme_e2edp 00:08:19.496 ************************************ 00:08:19.496 06:42:12 nvme.nvme_e2edp -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp 00:08:19.757 NVMe Write/Read with End-to-End data protection test 00:08:19.757 Attached to 0000:00:11.0 00:08:19.757 Attached to 0000:00:13.0 00:08:19.757 Attached to 0000:00:10.0 00:08:19.757 Attached to 0000:00:12.0 00:08:19.757 Cleaning up... 
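The sgl test above submits vectored I/O through spdk_nvme_ns_cmd_readv()/writev(), which pull the scatter list from two caller-supplied callbacks; requests whose SGE lengths do not add up to a whole number of blocks are rejected up front, which is what the "Invalid IO length parameter" lines record. A sketch of the callback pair over a caller-owned iovec array (the sgl_ctx bookkeeping here is hypothetical):

#include <stdint.h>
#include <sys/uio.h>
#include "spdk/nvme.h"

struct sgl_ctx {
    struct iovec *iov;   /* hypothetical scatter list owned by the caller */
    int iovcnt;
    int iovpos;
    size_t iov_offset;
};

/* Called by the driver when it (re)starts walking the SGL at sgl_offset bytes. */
static void
reset_sgl(void *ref, uint32_t sgl_offset)
{
    struct sgl_ctx *c = ref;

    c->iovpos = 0;
    c->iov_offset = sgl_offset;
    while (c->iov_offset >= c->iov[c->iovpos].iov_len) {
        c->iov_offset -= c->iov[c->iovpos++].iov_len;
    }
}

/* Called repeatedly to fetch the next SGE; returns 0 on success. */
static int
next_sge(void *ref, void **address, uint32_t *length)
{
    struct sgl_ctx *c = ref;
    struct iovec *iov = &c->iov[c->iovpos++];

    *address = (uint8_t *)iov->iov_base + c->iov_offset;
    *length = iov->iov_len - c->iov_offset;
    c->iov_offset = 0;
    return 0;
}

/* Submit a vectored write of lba_count blocks starting at lba. If the combined
 * SGE lengths do not match lba_count * block size, the driver fails the request
 * immediately, which is what the invalid-length cases above exercise. */
static int
submit_writev(struct spdk_nvme_ns *ns, struct spdk_nvme_qpair *qpair,
              struct sgl_ctx *c, uint64_t lba, uint32_t lba_count,
              spdk_nvme_cmd_cb cb_fn)
{
    return spdk_nvme_ns_cmd_writev(ns, qpair, lba, lba_count,
                                   cb_fn, c, 0, reset_sgl, next_sge);
}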
00:08:19.757 00:08:19.757 real 0m0.198s 00:08:19.757 user 0m0.059s 00:08:19.757 sys 0m0.092s 00:08:19.757 06:42:12 nvme.nvme_e2edp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:19.757 06:42:12 nvme.nvme_e2edp -- common/autotest_common.sh@10 -- # set +x 00:08:19.757 ************************************ 00:08:19.757 END TEST nvme_e2edp 00:08:19.757 ************************************ 00:08:19.757 06:42:12 nvme -- nvme/nvme.sh@90 -- # run_test nvme_reserve /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve 00:08:19.757 06:42:12 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:19.757 06:42:12 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:19.757 06:42:12 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:19.757 ************************************ 00:08:19.757 START TEST nvme_reserve 00:08:19.757 ************************************ 00:08:19.757 06:42:12 nvme.nvme_reserve -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve 00:08:20.019 ===================================================== 00:08:20.019 NVMe Controller at PCI bus 0, device 17, function 0 00:08:20.019 ===================================================== 00:08:20.019 Reservations: Not Supported 00:08:20.019 ===================================================== 00:08:20.019 NVMe Controller at PCI bus 0, device 19, function 0 00:08:20.019 ===================================================== 00:08:20.019 Reservations: Not Supported 00:08:20.019 ===================================================== 00:08:20.019 NVMe Controller at PCI bus 0, device 16, function 0 00:08:20.019 ===================================================== 00:08:20.019 Reservations: Not Supported 00:08:20.019 ===================================================== 00:08:20.019 NVMe Controller at PCI bus 0, device 18, function 0 00:08:20.019 ===================================================== 00:08:20.019 Reservations: Not Supported 00:08:20.019 Reservation test passed 00:08:20.019 00:08:20.019 real 0m0.185s 00:08:20.019 user 0m0.067s 00:08:20.019 sys 0m0.078s 00:08:20.019 06:42:12 nvme.nvme_reserve -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:20.019 06:42:12 nvme.nvme_reserve -- common/autotest_common.sh@10 -- # set +x 00:08:20.019 ************************************ 00:08:20.019 END TEST nvme_reserve 00:08:20.019 ************************************ 00:08:20.019 06:42:12 nvme -- nvme/nvme.sh@91 -- # run_test nvme_err_injection /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection 00:08:20.019 06:42:12 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:20.019 06:42:12 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:20.019 06:42:12 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:20.019 ************************************ 00:08:20.019 START TEST nvme_err_injection 00:08:20.019 ************************************ 00:08:20.019 06:42:12 nvme.nvme_err_injection -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection 00:08:20.278 NVMe Error Injection test 00:08:20.278 Attached to 0000:00:11.0 00:08:20.278 Attached to 0000:00:13.0 00:08:20.278 Attached to 0000:00:10.0 00:08:20.278 Attached to 0000:00:12.0 00:08:20.278 0000:00:11.0: get features failed as expected 00:08:20.278 0000:00:13.0: get features failed as expected 00:08:20.278 0000:00:10.0: get features failed as expected 00:08:20.278 0000:00:12.0: get features failed as expected 00:08:20.278 
0000:00:11.0: get features successfully as expected 00:08:20.278 0000:00:13.0: get features successfully as expected 00:08:20.278 0000:00:10.0: get features successfully as expected 00:08:20.278 0000:00:12.0: get features successfully as expected 00:08:20.278 0000:00:11.0: read failed as expected 00:08:20.278 0000:00:13.0: read failed as expected 00:08:20.278 0000:00:10.0: read failed as expected 00:08:20.278 0000:00:12.0: read failed as expected 00:08:20.279 0000:00:11.0: read successfully as expected 00:08:20.279 0000:00:13.0: read successfully as expected 00:08:20.279 0000:00:10.0: read successfully as expected 00:08:20.279 0000:00:12.0: read successfully as expected 00:08:20.279 Cleaning up... 00:08:20.279 00:08:20.279 real 0m0.207s 00:08:20.279 user 0m0.074s 00:08:20.279 sys 0m0.089s 00:08:20.279 06:42:13 nvme.nvme_err_injection -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:20.279 ************************************ 00:08:20.279 END TEST nvme_err_injection 00:08:20.279 ************************************ 00:08:20.279 06:42:13 nvme.nvme_err_injection -- common/autotest_common.sh@10 -- # set +x 00:08:20.279 06:42:13 nvme -- nvme/nvme.sh@92 -- # run_test nvme_overhead /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0 00:08:20.279 06:42:13 nvme -- common/autotest_common.sh@1105 -- # '[' 9 -le 1 ']' 00:08:20.279 06:42:13 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:20.279 06:42:13 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:20.279 ************************************ 00:08:20.279 START TEST nvme_overhead 00:08:20.279 ************************************ 00:08:20.279 06:42:13 nvme.nvme_overhead -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0 00:08:21.652 Initializing NVMe Controllers 00:08:21.652 Attached to 0000:00:11.0 00:08:21.652 Attached to 0000:00:13.0 00:08:21.652 Attached to 0000:00:10.0 00:08:21.652 Attached to 0000:00:12.0 00:08:21.652 Initialization complete. Launching workers. 
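The err_injection run above arms a one-shot failure for the admin Get Features command and a read, observes the expected error, then sees the retry succeed ("failed as expected" followed by "successfully as expected"). The hook SPDK provides for this is spdk_nvme_qpair_add_cmd_error_injection(); a sketch, with the status code chosen for illustration:

#include "spdk/nvme.h"

/* Arm one injected failure for the admin Get Features command. Passing a NULL
 * qpair targets the admin queue. The first Get Features issued after this
 * completes with the injected status; the next one goes through normally. */
static int
inject_get_features_error(struct spdk_nvme_ctrlr *ctrlr)
{
    return spdk_nvme_qpair_add_cmd_error_injection(ctrlr, NULL,
            SPDK_NVME_OPC_GET_FEATURES,
            false,                  /* still submit the command to the device */
            0,                      /* no artificial timeout */
            1,                      /* inject exactly one error */
            SPDK_NVME_SCT_GENERIC,  /* illustrative status choice */
            SPDK_NVME_SC_INVALID_FIELD);
}

The injection can be disarmed afterwards with spdk_nvme_qpair_remove_cmd_error_injection() for the same opcode, which is how a test restores normal behavior before cleanup.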
00:08:21.652 submit (in ns) avg, min, max = 11330.0, 9875.4, 198970.8 00:08:21.652 complete (in ns) avg, min, max = 7612.1, 7136.9, 1191930.8 00:08:21.652 00:08:21.652 Submit histogram 00:08:21.652 ================ 00:08:21.652 Range in us Cumulative Count 00:08:21.652 9.846 - 9.895: 0.0055% ( 1) 00:08:21.652 10.338 - 10.388: 0.0109% ( 1) 00:08:21.652 10.388 - 10.437: 0.0164% ( 1) 00:08:21.652 10.732 - 10.782: 0.0492% ( 6) 00:08:21.652 10.782 - 10.831: 0.5198% ( 86) 00:08:21.652 10.831 - 10.880: 3.7373% ( 588) 00:08:21.652 10.880 - 10.929: 15.0807% ( 2073) 00:08:21.652 10.929 - 10.978: 35.5622% ( 3743) 00:08:21.652 10.978 - 11.028: 57.0287% ( 3923) 00:08:21.652 11.028 - 11.077: 72.9193% ( 2904) 00:08:21.652 11.077 - 11.126: 80.8700% ( 1453) 00:08:21.652 11.126 - 11.175: 84.6402% ( 689) 00:08:21.652 11.175 - 11.225: 86.3475% ( 312) 00:08:21.652 11.225 - 11.274: 87.7319% ( 253) 00:08:21.652 11.274 - 11.323: 88.9466% ( 222) 00:08:21.652 11.323 - 11.372: 89.8769% ( 170) 00:08:21.652 11.372 - 11.422: 90.4679% ( 108) 00:08:21.652 11.422 - 11.471: 90.9220% ( 83) 00:08:21.652 11.471 - 11.520: 91.2285% ( 56) 00:08:21.652 11.520 - 11.569: 91.4692% ( 44) 00:08:21.652 11.569 - 11.618: 91.6170% ( 27) 00:08:21.652 11.618 - 11.668: 91.7319% ( 21) 00:08:21.652 11.668 - 11.717: 91.8741% ( 26) 00:08:21.652 11.717 - 11.766: 92.0766% ( 37) 00:08:21.652 11.766 - 11.815: 92.3557% ( 51) 00:08:21.652 11.815 - 11.865: 92.7059% ( 64) 00:08:21.652 11.865 - 11.914: 93.1108% ( 74) 00:08:21.652 11.914 - 11.963: 93.6471% ( 98) 00:08:21.652 11.963 - 12.012: 94.1286% ( 88) 00:08:21.652 12.012 - 12.062: 94.6703% ( 99) 00:08:21.652 12.062 - 12.111: 95.1518% ( 88) 00:08:21.652 12.111 - 12.160: 95.3871% ( 43) 00:08:21.652 12.160 - 12.209: 95.5841% ( 36) 00:08:21.652 12.209 - 12.258: 95.8030% ( 40) 00:08:21.652 12.258 - 12.308: 95.9289% ( 23) 00:08:21.652 12.308 - 12.357: 96.0219% ( 17) 00:08:21.652 12.357 - 12.406: 96.0547% ( 6) 00:08:21.652 12.406 - 12.455: 96.0876% ( 6) 00:08:21.652 12.455 - 12.505: 96.1149% ( 5) 00:08:21.652 12.505 - 12.554: 96.1259% ( 2) 00:08:21.652 12.554 - 12.603: 96.1423% ( 3) 00:08:21.652 12.603 - 12.702: 96.1587% ( 3) 00:08:21.652 12.702 - 12.800: 96.2845% ( 23) 00:08:21.652 12.800 - 12.898: 96.4432% ( 29) 00:08:21.652 12.898 - 12.997: 96.6347% ( 35) 00:08:21.652 12.997 - 13.095: 96.8208% ( 34) 00:08:21.652 13.095 - 13.194: 96.9685% ( 27) 00:08:21.652 13.194 - 13.292: 97.0725% ( 19) 00:08:21.652 13.292 - 13.391: 97.1218% ( 9) 00:08:21.652 13.391 - 13.489: 97.1546% ( 6) 00:08:21.652 13.489 - 13.588: 97.1874% ( 6) 00:08:21.652 13.588 - 13.686: 97.2038% ( 3) 00:08:21.652 13.686 - 13.785: 97.2312% ( 5) 00:08:21.652 13.785 - 13.883: 97.2585% ( 5) 00:08:21.652 13.883 - 13.982: 97.2859% ( 5) 00:08:21.652 13.982 - 14.080: 97.2914% ( 1) 00:08:21.652 14.080 - 14.178: 97.3078% ( 3) 00:08:21.652 14.178 - 14.277: 97.3297% ( 4) 00:08:21.652 14.277 - 14.375: 97.3680% ( 7) 00:08:21.652 14.375 - 14.474: 97.4063% ( 7) 00:08:21.652 14.474 - 14.572: 97.4555% ( 9) 00:08:21.652 14.572 - 14.671: 97.4884% ( 6) 00:08:21.652 14.671 - 14.769: 97.5267% ( 7) 00:08:21.652 14.769 - 14.868: 97.5431% ( 3) 00:08:21.652 14.868 - 14.966: 97.5705% ( 5) 00:08:21.652 14.966 - 15.065: 97.5814% ( 2) 00:08:21.652 15.065 - 15.163: 97.5978% ( 3) 00:08:21.652 15.163 - 15.262: 97.6306% ( 6) 00:08:21.652 15.262 - 15.360: 97.6525% ( 4) 00:08:21.652 15.458 - 15.557: 97.6799% ( 5) 00:08:21.652 15.557 - 15.655: 97.6908% ( 2) 00:08:21.652 15.655 - 15.754: 97.7073% ( 3) 00:08:21.652 15.754 - 15.852: 97.7127% ( 1) 00:08:21.652 15.852 - 15.951: 97.7182% 
( 1) 00:08:21.652 15.951 - 16.049: 97.7237% ( 1) 00:08:21.652 16.049 - 16.148: 97.7291% ( 1) 00:08:21.652 16.148 - 16.246: 97.7401% ( 2) 00:08:21.652 16.246 - 16.345: 97.7510% ( 2) 00:08:21.652 16.345 - 16.443: 97.7729% ( 4) 00:08:21.652 16.443 - 16.542: 97.8550% ( 15) 00:08:21.652 16.542 - 16.640: 97.9425% ( 16) 00:08:21.652 16.640 - 16.738: 97.9973% ( 10) 00:08:21.652 16.738 - 16.837: 98.0903% ( 17) 00:08:21.652 16.837 - 16.935: 98.1833% ( 17) 00:08:21.652 16.935 - 17.034: 98.2654% ( 15) 00:08:21.652 17.034 - 17.132: 98.3092% ( 8) 00:08:21.652 17.132 - 17.231: 98.3529% ( 8) 00:08:21.652 17.231 - 17.329: 98.4131% ( 11) 00:08:21.652 17.329 - 17.428: 98.4733% ( 11) 00:08:21.652 17.428 - 17.526: 98.5062% ( 6) 00:08:21.652 17.526 - 17.625: 98.5171% ( 2) 00:08:21.652 17.625 - 17.723: 98.5390% ( 4) 00:08:21.652 17.723 - 17.822: 98.5663% ( 5) 00:08:21.652 17.822 - 17.920: 98.5828% ( 3) 00:08:21.652 17.920 - 18.018: 98.5937% ( 2) 00:08:21.652 18.018 - 18.117: 98.6101% ( 3) 00:08:21.652 18.117 - 18.215: 98.6265% ( 3) 00:08:21.652 18.215 - 18.314: 98.6320% ( 1) 00:08:21.652 18.314 - 18.412: 98.6430% ( 2) 00:08:21.652 18.412 - 18.511: 98.6539% ( 2) 00:08:21.652 18.511 - 18.609: 98.6594% ( 1) 00:08:21.652 18.708 - 18.806: 98.6703% ( 2) 00:08:21.652 18.806 - 18.905: 98.6813% ( 2) 00:08:21.652 18.905 - 19.003: 98.6867% ( 1) 00:08:21.652 19.102 - 19.200: 98.6922% ( 1) 00:08:21.652 19.495 - 19.594: 98.7196% ( 5) 00:08:21.652 19.594 - 19.692: 99.0096% ( 53) 00:08:21.652 19.692 - 19.791: 99.2996% ( 53) 00:08:21.652 19.791 - 19.889: 99.4583% ( 29) 00:08:21.652 19.889 - 19.988: 99.5130% ( 10) 00:08:21.652 19.988 - 20.086: 99.5622% ( 9) 00:08:21.652 20.086 - 20.185: 99.5896% ( 5) 00:08:21.652 20.185 - 20.283: 99.6115% ( 4) 00:08:21.652 20.283 - 20.382: 99.6224% ( 2) 00:08:21.652 20.382 - 20.480: 99.6498% ( 5) 00:08:21.652 20.480 - 20.578: 99.6553% ( 1) 00:08:21.652 20.578 - 20.677: 99.6717% ( 3) 00:08:21.652 20.677 - 20.775: 99.6772% ( 1) 00:08:21.652 20.775 - 20.874: 99.7045% ( 5) 00:08:21.652 21.071 - 21.169: 99.7100% ( 1) 00:08:21.652 21.169 - 21.268: 99.7155% ( 1) 00:08:21.652 21.268 - 21.366: 99.7209% ( 1) 00:08:21.652 21.563 - 21.662: 99.7264% ( 1) 00:08:21.652 21.760 - 21.858: 99.7319% ( 1) 00:08:21.652 22.055 - 22.154: 99.7428% ( 2) 00:08:21.652 22.449 - 22.548: 99.7483% ( 1) 00:08:21.652 23.138 - 23.237: 99.7538% ( 1) 00:08:21.652 23.434 - 23.532: 99.7592% ( 1) 00:08:21.652 23.631 - 23.729: 99.7647% ( 1) 00:08:21.652 23.729 - 23.828: 99.7702% ( 1) 00:08:21.652 23.926 - 24.025: 99.7756% ( 1) 00:08:21.652 24.418 - 24.517: 99.7811% ( 1) 00:08:21.652 24.517 - 24.615: 99.7866% ( 1) 00:08:21.652 24.714 - 24.812: 99.7921% ( 1) 00:08:21.652 25.009 - 25.108: 99.7975% ( 1) 00:08:21.652 25.797 - 25.994: 99.8030% ( 1) 00:08:21.652 26.978 - 27.175: 99.8085% ( 1) 00:08:21.652 28.554 - 28.751: 99.8140% ( 1) 00:08:21.652 31.114 - 31.311: 99.8413% ( 5) 00:08:21.652 31.311 - 31.508: 99.8687% ( 5) 00:08:21.652 31.508 - 31.705: 99.8851% ( 3) 00:08:21.652 31.705 - 31.902: 99.9015% ( 3) 00:08:21.652 32.295 - 32.492: 99.9070% ( 1) 00:08:21.652 32.492 - 32.689: 99.9124% ( 1) 00:08:21.652 32.689 - 32.886: 99.9179% ( 1) 00:08:21.652 33.083 - 33.280: 99.9234% ( 1) 00:08:21.652 33.871 - 34.068: 99.9289% ( 1) 00:08:21.652 34.462 - 34.658: 99.9343% ( 1) 00:08:21.652 44.308 - 44.505: 99.9398% ( 1) 00:08:21.652 45.686 - 45.883: 99.9453% ( 1) 00:08:21.652 47.655 - 47.852: 99.9508% ( 1) 00:08:21.652 50.412 - 50.806: 99.9617% ( 2) 00:08:21.652 53.169 - 53.563: 99.9672% ( 1) 00:08:21.652 54.745 - 55.138: 99.9726% ( 1) 00:08:21.652 
56.320 - 56.714: 99.9836% ( 2) 00:08:21.652 76.406 - 76.800: 99.9891% ( 1) 00:08:21.652 95.311 - 95.705: 99.9945% ( 1) 00:08:21.652 198.498 - 199.286: 100.0000% ( 1) 00:08:21.652 00:08:21.652 Complete histogram 00:08:21.652 ================== 00:08:21.652 Range in us Cumulative Count 00:08:21.652 7.089 - 7.138: 0.0055% ( 1) 00:08:21.652 7.138 - 7.188: 0.2244% ( 40) 00:08:21.652 7.188 - 7.237: 4.4487% ( 772) 00:08:21.652 7.237 - 7.286: 23.5841% ( 3497) 00:08:21.652 7.286 - 7.335: 53.4227% ( 5453) 00:08:21.652 7.335 - 7.385: 75.8140% ( 4092) 00:08:21.652 7.385 - 7.434: 87.4090% ( 2119) 00:08:21.652 7.434 - 7.483: 92.0438% ( 847) 00:08:21.652 7.483 - 7.532: 94.4022% ( 431) 00:08:21.652 7.532 - 7.582: 95.6115% ( 221) 00:08:21.652 7.582 - 7.631: 96.2517% ( 117) 00:08:21.652 7.631 - 7.680: 96.6840% ( 79) 00:08:21.652 7.680 - 7.729: 96.9576% ( 50) 00:08:21.652 7.729 - 7.778: 97.0725% ( 21) 00:08:21.652 7.778 - 7.828: 97.1272% ( 10) 00:08:21.652 7.828 - 7.877: 97.1601% ( 6) 00:08:21.652 7.877 - 7.926: 97.1655% ( 1) 00:08:21.653 7.926 - 7.975: 97.1765% ( 2) 00:08:21.653 7.975 - 8.025: 97.1929% ( 3) 00:08:21.653 8.025 - 8.074: 97.2257% ( 6) 00:08:21.653 8.074 - 8.123: 97.2312% ( 1) 00:08:21.653 8.123 - 8.172: 97.2695% ( 7) 00:08:21.653 8.172 - 8.222: 97.2859% ( 3) 00:08:21.653 8.271 - 8.320: 97.2914% ( 1) 00:08:21.653 8.320 - 8.369: 97.3078% ( 3) 00:08:21.653 8.369 - 8.418: 97.3133% ( 1) 00:08:21.653 8.418 - 8.468: 97.3187% ( 1) 00:08:21.653 8.468 - 8.517: 97.3352% ( 3) 00:08:21.653 8.517 - 8.566: 97.3461% ( 2) 00:08:21.653 8.566 - 8.615: 97.3516% ( 1) 00:08:21.653 8.615 - 8.665: 97.3625% ( 2) 00:08:21.653 8.665 - 8.714: 97.3735% ( 2) 00:08:21.653 8.812 - 8.862: 97.3789% ( 1) 00:08:21.653 8.862 - 8.911: 97.3844% ( 1) 00:08:21.653 8.911 - 8.960: 97.3899% ( 1) 00:08:21.653 9.009 - 9.058: 97.3953% ( 1) 00:08:21.653 9.206 - 9.255: 97.4118% ( 3) 00:08:21.653 9.502 - 9.551: 97.4172% ( 1) 00:08:21.653 9.600 - 9.649: 97.4227% ( 1) 00:08:21.653 9.797 - 9.846: 97.4282% ( 1) 00:08:21.653 9.846 - 9.895: 97.4337% ( 1) 00:08:21.653 10.191 - 10.240: 97.4501% ( 3) 00:08:21.653 10.437 - 10.486: 97.4555% ( 1) 00:08:21.653 10.535 - 10.585: 97.4610% ( 1) 00:08:21.653 10.732 - 10.782: 97.4665% ( 1) 00:08:21.653 10.831 - 10.880: 97.4720% ( 1) 00:08:21.653 11.126 - 11.175: 97.4774% ( 1) 00:08:21.653 11.520 - 11.569: 97.4884% ( 2) 00:08:21.653 11.569 - 11.618: 97.4938% ( 1) 00:08:21.653 11.766 - 11.815: 97.4993% ( 1) 00:08:21.653 11.914 - 11.963: 97.5048% ( 1) 00:08:21.653 11.963 - 12.012: 97.5103% ( 1) 00:08:21.653 12.062 - 12.111: 97.5157% ( 1) 00:08:21.653 12.160 - 12.209: 97.5267% ( 2) 00:08:21.653 12.209 - 12.258: 97.5321% ( 1) 00:08:21.653 12.258 - 12.308: 97.5376% ( 1) 00:08:21.653 12.357 - 12.406: 97.5431% ( 1) 00:08:21.653 12.455 - 12.505: 97.5486% ( 1) 00:08:21.653 12.505 - 12.554: 97.5595% ( 2) 00:08:21.653 12.603 - 12.702: 97.5759% ( 3) 00:08:21.653 12.702 - 12.800: 97.6306% ( 10) 00:08:21.653 12.800 - 12.898: 97.6744% ( 8) 00:08:21.653 12.898 - 12.997: 97.7620% ( 16) 00:08:21.653 12.997 - 13.095: 97.8550% ( 17) 00:08:21.653 13.095 - 13.194: 97.9371% ( 15) 00:08:21.653 13.194 - 13.292: 98.0465% ( 20) 00:08:21.653 13.292 - 13.391: 98.1560% ( 20) 00:08:21.653 13.391 - 13.489: 98.2490% ( 17) 00:08:21.653 13.489 - 13.588: 98.3256% ( 14) 00:08:21.653 13.588 - 13.686: 98.4077% ( 15) 00:08:21.653 13.686 - 13.785: 98.6320% ( 41) 00:08:21.653 13.785 - 13.883: 99.1354% ( 92) 00:08:21.653 13.883 - 13.982: 99.3817% ( 45) 00:08:21.653 13.982 - 14.080: 99.4419% ( 11) 00:08:21.653 14.080 - 14.178: 99.5021% ( 11) 00:08:21.653 
14.178 - 14.277: 99.5239% ( 4) 00:08:21.653 14.277 - 14.375: 99.5513% ( 5) 00:08:21.653 14.375 - 14.474: 99.5732% ( 4) 00:08:21.653 14.474 - 14.572: 99.5841% ( 2) 00:08:21.653 14.572 - 14.671: 99.5951% ( 2) 00:08:21.653 14.769 - 14.868: 99.6005% ( 1) 00:08:21.653 14.966 - 15.065: 99.6170% ( 3) 00:08:21.653 15.262 - 15.360: 99.6279% ( 2) 00:08:21.653 15.458 - 15.557: 99.6334% ( 1) 00:08:21.653 15.557 - 15.655: 99.6389% ( 1) 00:08:21.653 16.148 - 16.246: 99.6443% ( 1) 00:08:21.653 16.246 - 16.345: 99.6498% ( 1) 00:08:21.653 16.345 - 16.443: 99.6553% ( 1) 00:08:21.653 16.542 - 16.640: 99.6607% ( 1) 00:08:21.653 17.034 - 17.132: 99.6662% ( 1) 00:08:21.653 17.428 - 17.526: 99.6717% ( 1) 00:08:21.653 17.723 - 17.822: 99.6772% ( 1) 00:08:21.653 17.920 - 18.018: 99.6826% ( 1) 00:08:21.653 18.018 - 18.117: 99.6936% ( 2) 00:08:21.653 19.003 - 19.102: 99.6990% ( 1) 00:08:21.653 19.200 - 19.298: 99.7045% ( 1) 00:08:21.653 19.298 - 19.397: 99.7100% ( 1) 00:08:21.653 19.594 - 19.692: 99.7155% ( 1) 00:08:21.653 19.889 - 19.988: 99.7209% ( 1) 00:08:21.653 20.283 - 20.382: 99.7319% ( 2) 00:08:21.653 20.382 - 20.480: 99.7373% ( 1) 00:08:21.653 21.169 - 21.268: 99.7428% ( 1) 00:08:21.653 21.662 - 21.760: 99.7483% ( 1) 00:08:21.653 21.957 - 22.055: 99.7702% ( 4) 00:08:21.653 22.055 - 22.154: 99.7811% ( 2) 00:08:21.653 22.154 - 22.252: 99.8140% ( 6) 00:08:21.653 22.252 - 22.351: 99.8413% ( 5) 00:08:21.653 22.351 - 22.449: 99.8577% ( 3) 00:08:21.653 22.449 - 22.548: 99.8632% ( 1) 00:08:21.653 22.745 - 22.843: 99.8687% ( 1) 00:08:21.653 22.942 - 23.040: 99.8741% ( 1) 00:08:21.653 23.926 - 24.025: 99.8796% ( 1) 00:08:21.653 24.320 - 24.418: 99.8851% ( 1) 00:08:21.653 24.517 - 24.615: 99.8906% ( 1) 00:08:21.653 24.615 - 24.714: 99.8960% ( 1) 00:08:21.653 25.206 - 25.403: 99.9015% ( 1) 00:08:21.653 25.797 - 25.994: 99.9070% ( 1) 00:08:21.653 25.994 - 26.191: 99.9124% ( 1) 00:08:21.653 27.766 - 27.963: 99.9234% ( 2) 00:08:21.653 28.948 - 29.145: 99.9289% ( 1) 00:08:21.653 30.129 - 30.326: 99.9343% ( 1) 00:08:21.653 34.265 - 34.462: 99.9398% ( 1) 00:08:21.653 35.446 - 35.643: 99.9453% ( 1) 00:08:21.653 43.914 - 44.111: 99.9508% ( 1) 00:08:21.653 45.686 - 45.883: 99.9562% ( 1) 00:08:21.653 47.458 - 47.655: 99.9617% ( 1) 00:08:21.653 50.412 - 50.806: 99.9726% ( 2) 00:08:21.653 54.745 - 55.138: 99.9836% ( 2) 00:08:21.653 60.258 - 60.652: 99.9891% ( 1) 00:08:21.653 66.166 - 66.560: 99.9945% ( 1) 00:08:21.653 1190.991 - 1197.292: 100.0000% ( 1) 00:08:21.653 00:08:21.653 00:08:21.653 real 0m1.197s 00:08:21.653 user 0m1.056s 00:08:21.653 sys 0m0.094s 00:08:21.653 06:42:14 nvme.nvme_overhead -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:21.653 ************************************ 00:08:21.653 END TEST nvme_overhead 00:08:21.653 ************************************ 00:08:21.653 06:42:14 nvme.nvme_overhead -- common/autotest_common.sh@10 -- # set +x 00:08:21.653 06:42:14 nvme -- nvme/nvme.sh@93 -- # run_test nvme_arbitration /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0 00:08:21.653 06:42:14 nvme -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:08:21.653 06:42:14 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:21.653 06:42:14 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:21.653 ************************************ 00:08:21.653 START TEST nvme_arbitration 00:08:21.653 ************************************ 00:08:21.653 06:42:14 nvme.nvme_arbitration -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0 
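The arbitration run that follows starts one I/O thread per core, each driving an urgent-priority queue. With weighted round robin arbitration, a qpair's priority is fixed when it is allocated; a sketch of that allocation (the priority only takes effect when the controller was brought up with WRR arbitration enabled, which the arbitration example configures separately):

#include "spdk/nvme.h"

/* Allocate an I/O qpair with an explicit arbitration priority, e.g.
 * SPDK_NVME_QPRIO_URGENT as in the run below, or HIGH/MEDIUM/LOW. */
static struct spdk_nvme_qpair *
alloc_prio_qpair(struct spdk_nvme_ctrlr *ctrlr, enum spdk_nvme_qprio qprio)
{
    struct spdk_nvme_io_qpair_opts opts;

    spdk_nvme_ctrlr_get_default_io_qpair_opts(ctrlr, &opts, sizeof(opts));
    opts.qprio = qprio;
    return spdk_nvme_ctrlr_alloc_io_qpair(ctrlr, &opts, sizeof(opts));
}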
00:08:24.932 Initializing NVMe Controllers 00:08:24.932 Attached to 0000:00:11.0 00:08:24.932 Attached to 0000:00:13.0 00:08:24.932 Attached to 0000:00:10.0 00:08:24.932 Attached to 0000:00:12.0 00:08:24.932 Associating QEMU NVMe Ctrl (12341 ) with lcore 0 00:08:24.932 Associating QEMU NVMe Ctrl (12343 ) with lcore 1 00:08:24.932 Associating QEMU NVMe Ctrl (12340 ) with lcore 2 00:08:24.932 Associating QEMU NVMe Ctrl (12342 ) with lcore 3 00:08:24.932 Associating QEMU NVMe Ctrl (12342 ) with lcore 0 00:08:24.932 Associating QEMU NVMe Ctrl (12342 ) with lcore 1 00:08:24.932 /home/vagrant/spdk_repo/spdk/build/examples/arbitration run with configuration: 00:08:24.932 /home/vagrant/spdk_repo/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i 0 00:08:24.932 Initialization complete. Launching workers. 00:08:24.932 Starting thread on core 1 with urgent priority queue 00:08:24.932 Starting thread on core 2 with urgent priority queue 00:08:24.932 Starting thread on core 3 with urgent priority queue 00:08:24.932 Starting thread on core 0 with urgent priority queue 00:08:24.932 QEMU NVMe Ctrl (12341 ) core 0: 6378.67 IO/s 15.68 secs/100000 ios 00:08:24.932 QEMU NVMe Ctrl (12342 ) core 0: 6378.67 IO/s 15.68 secs/100000 ios 00:08:24.932 QEMU NVMe Ctrl (12343 ) core 1: 6272.00 IO/s 15.94 secs/100000 ios 00:08:24.932 QEMU NVMe Ctrl (12342 ) core 1: 6272.00 IO/s 15.94 secs/100000 ios 00:08:24.932 QEMU NVMe Ctrl (12340 ) core 2: 5952.00 IO/s 16.80 secs/100000 ios 00:08:24.932 QEMU NVMe Ctrl (12342 ) core 3: 6080.00 IO/s 16.45 secs/100000 ios 00:08:24.932 ======================================================== 00:08:24.932 00:08:24.932 00:08:24.932 real 0m3.222s 00:08:24.932 user 0m9.005s 00:08:24.932 sys 0m0.110s 00:08:24.932 ************************************ 00:08:24.932 END TEST nvme_arbitration 00:08:24.932 ************************************ 00:08:24.932 06:42:17 nvme.nvme_arbitration -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:24.932 06:42:17 nvme.nvme_arbitration -- common/autotest_common.sh@10 -- # set +x 00:08:24.932 06:42:17 nvme -- nvme/nvme.sh@94 -- # run_test nvme_single_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0 00:08:24.932 06:42:17 nvme -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:08:24.932 06:42:17 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:24.932 06:42:17 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:24.932 ************************************ 00:08:24.932 START TEST nvme_single_aen 00:08:24.932 ************************************ 00:08:24.932 06:42:17 nvme.nvme_single_aen -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0 00:08:24.932 Asynchronous Event Request test 00:08:24.932 Attached to 0000:00:11.0 00:08:24.932 Attached to 0000:00:13.0 00:08:24.932 Attached to 0000:00:10.0 00:08:24.932 Attached to 0000:00:12.0 00:08:24.932 Reset controller to setup AER completions for this process 00:08:24.932 Registering asynchronous event callbacks... 
00:08:24.932 Getting orig temperature thresholds of all controllers 00:08:24.932 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:24.932 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:24.932 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:24.932 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:24.932 Setting all controllers temperature threshold low to trigger AER 00:08:24.932 Waiting for all controllers temperature threshold to be set lower 00:08:24.932 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:24.932 aer_cb - Resetting Temp Threshold for device: 0000:00:11.0 00:08:24.932 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:24.932 aer_cb - Resetting Temp Threshold for device: 0000:00:13.0 00:08:24.932 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:24.932 aer_cb - Resetting Temp Threshold for device: 0000:00:10.0 00:08:24.932 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:24.932 aer_cb - Resetting Temp Threshold for device: 0000:00:12.0 00:08:24.932 Waiting for all controllers to trigger AER and reset threshold 00:08:24.932 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:24.932 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:24.932 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:24.932 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:24.932 Cleaning up... 00:08:24.932 00:08:24.932 real 0m0.209s 00:08:24.932 user 0m0.077s 00:08:24.932 sys 0m0.088s 00:08:24.932 ************************************ 00:08:24.932 END TEST nvme_single_aen 00:08:24.932 ************************************ 00:08:24.932 06:42:17 nvme.nvme_single_aen -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:24.932 06:42:17 nvme.nvme_single_aen -- common/autotest_common.sh@10 -- # set +x 00:08:24.932 06:42:18 nvme -- nvme/nvme.sh@95 -- # run_test nvme_doorbell_aers nvme_doorbell_aers 00:08:24.932 06:42:18 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:24.932 06:42:18 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:24.932 06:42:18 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:25.191 ************************************ 00:08:25.191 START TEST nvme_doorbell_aers 00:08:25.191 ************************************ 00:08:25.191 06:42:18 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1129 -- # nvme_doorbell_aers 00:08:25.191 06:42:18 nvme.nvme_doorbell_aers -- nvme/nvme.sh@70 -- # bdfs=() 00:08:25.191 06:42:18 nvme.nvme_doorbell_aers -- nvme/nvme.sh@70 -- # local bdfs bdf 00:08:25.191 06:42:18 nvme.nvme_doorbell_aers -- nvme/nvme.sh@71 -- # bdfs=($(get_nvme_bdfs)) 00:08:25.191 06:42:18 nvme.nvme_doorbell_aers -- nvme/nvme.sh@71 -- # get_nvme_bdfs 00:08:25.191 06:42:18 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1498 -- # bdfs=() 00:08:25.191 06:42:18 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1498 -- # local bdfs 00:08:25.191 06:42:18 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:08:25.191 06:42:18 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:08:25.191 06:42:18 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 
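The AER test above follows a fixed recipe: register an asynchronous event callback, lower the temperature threshold feature below the controller's reported 323 K so the event fires, then reset the threshold from the callback. A sketch of the trigger side; the 300 K value is an assumption for illustration, and completions still have to be reaped by polling spdk_nvme_ctrlr_process_admin_completions():

#include "spdk/nvme.h"

static void
aer_cb(void *arg, const struct spdk_nvme_cpl *cpl)
{
    /* Fires when the controller posts an async event, e.g. the temperature
     * crossing the threshold set below; the test then reads log page 2 and
     * restores the original threshold, as the aer_cb lines above show. */
}

static void
set_feature_done(void *arg, const struct spdk_nvme_cpl *cpl)
{
    /* completion of the Set Features command */
}

static int
trigger_temp_aer(struct spdk_nvme_ctrlr *ctrlr)
{
    spdk_nvme_ctrlr_register_aer_callback(ctrlr, aer_cb, NULL);

    /* Set the over-temperature threshold below the current temperature
     * (323 K in the log). cdw11 carries TMPTH in Kelvin in its low 16 bits;
     * 300 K is an illustrative value. */
    return spdk_nvme_ctrlr_cmd_set_feature(ctrlr,
            SPDK_NVME_FEAT_TEMPERATURE_THRESHOLD,
            300, 0, NULL, 0, set_feature_done, NULL);
}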
00:08:25.191 06:42:18 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:08:25.191 06:42:18 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:08:25.191 06:42:18 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:08:25.191 06:42:18 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:10.0' 00:08:25.191 [2024-11-18 06:42:18.269546] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75037) is not found. Dropping the request. 00:08:35.162 Executing: test_write_invalid_db 00:08:35.162 Waiting for AER completion... 00:08:35.163 Failure: test_write_invalid_db 00:08:35.163 00:08:35.163 Executing: test_invalid_db_write_overflow_sq 00:08:35.163 Waiting for AER completion... 00:08:35.163 Failure: test_invalid_db_write_overflow_sq 00:08:35.163 00:08:35.163 Executing: test_invalid_db_write_overflow_cq 00:08:35.163 Waiting for AER completion... 00:08:35.163 Failure: test_invalid_db_write_overflow_cq 00:08:35.163 00:08:35.163 06:42:28 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:08:35.163 06:42:28 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:11.0' 00:08:35.421 [2024-11-18 06:42:28.296341] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75037) is not found. Dropping the request. 00:08:45.385 Executing: test_write_invalid_db 00:08:45.385 Waiting for AER completion... 00:08:45.385 Failure: test_write_invalid_db 00:08:45.385 00:08:45.385 Executing: test_invalid_db_write_overflow_sq 00:08:45.385 Waiting for AER completion... 00:08:45.385 Failure: test_invalid_db_write_overflow_sq 00:08:45.385 00:08:45.385 Executing: test_invalid_db_write_overflow_cq 00:08:45.385 Waiting for AER completion... 00:08:45.385 Failure: test_invalid_db_write_overflow_cq 00:08:45.385 00:08:45.385 06:42:38 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:08:45.385 06:42:38 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:12.0' 00:08:45.385 [2024-11-18 06:42:38.328853] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75037) is not found. Dropping the request. 00:08:55.353 Executing: test_write_invalid_db 00:08:55.353 Waiting for AER completion... 00:08:55.353 Failure: test_write_invalid_db 00:08:55.353 00:08:55.353 Executing: test_invalid_db_write_overflow_sq 00:08:55.353 Waiting for AER completion... 00:08:55.353 Failure: test_invalid_db_write_overflow_sq 00:08:55.353 00:08:55.353 Executing: test_invalid_db_write_overflow_cq 00:08:55.353 Waiting for AER completion... 
00:08:55.353 Failure: test_invalid_db_write_overflow_cq 00:08:55.353 00:08:55.353 06:42:48 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:08:55.353 06:42:48 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:13.0' 00:08:55.353 [2024-11-18 06:42:48.356030] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75037) is not found. Dropping the request. 00:09:05.340 Executing: test_write_invalid_db 00:09:05.340 Waiting for AER completion... 00:09:05.340 Failure: test_write_invalid_db 00:09:05.340 00:09:05.340 Executing: test_invalid_db_write_overflow_sq 00:09:05.340 Waiting for AER completion... 00:09:05.340 Failure: test_invalid_db_write_overflow_sq 00:09:05.340 00:09:05.340 Executing: test_invalid_db_write_overflow_cq 00:09:05.340 Waiting for AER completion... 00:09:05.340 Failure: test_invalid_db_write_overflow_cq 00:09:05.340 00:09:05.340 00:09:05.340 real 0m40.184s 00:09:05.340 user 0m34.167s 00:09:05.340 sys 0m5.677s 00:09:05.340 06:42:58 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:05.340 06:42:58 nvme.nvme_doorbell_aers -- common/autotest_common.sh@10 -- # set +x 00:09:05.340 ************************************ 00:09:05.340 END TEST nvme_doorbell_aers 00:09:05.340 ************************************ 00:09:05.340 06:42:58 nvme -- nvme/nvme.sh@97 -- # uname 00:09:05.340 06:42:58 nvme -- nvme/nvme.sh@97 -- # '[' Linux '!=' FreeBSD ']' 00:09:05.340 06:42:58 nvme -- nvme/nvme.sh@98 -- # run_test nvme_multi_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0 00:09:05.340 06:42:58 nvme -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:09:05.340 06:42:58 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:05.340 06:42:58 nvme -- common/autotest_common.sh@10 -- # set +x 00:09:05.340 ************************************ 00:09:05.340 START TEST nvme_multi_aen 00:09:05.340 ************************************ 00:09:05.340 06:42:58 nvme.nvme_multi_aen -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0 00:09:05.598 [2024-11-18 06:42:58.427375] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75037) is not found. Dropping the request. 00:09:05.598 [2024-11-18 06:42:58.427432] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75037) is not found. Dropping the request. 00:09:05.598 [2024-11-18 06:42:58.427443] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75037) is not found. Dropping the request. 00:09:05.598 [2024-11-18 06:42:58.428578] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75037) is not found. Dropping the request. 00:09:05.598 [2024-11-18 06:42:58.428607] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75037) is not found. Dropping the request. 00:09:05.598 [2024-11-18 06:42:58.428615] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75037) is not found. Dropping the request. 00:09:05.598 [2024-11-18 06:42:58.429551] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75037) is not found. 
Dropping the request. 00:09:05.598 [2024-11-18 06:42:58.429576] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75037) is not found. Dropping the request. 00:09:05.598 [2024-11-18 06:42:58.429583] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75037) is not found. Dropping the request. 00:09:05.598 [2024-11-18 06:42:58.430485] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75037) is not found. Dropping the request. 00:09:05.598 [2024-11-18 06:42:58.430512] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75037) is not found. Dropping the request. 00:09:05.598 [2024-11-18 06:42:58.430518] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75037) is not found. Dropping the request. 00:09:05.598 Child process pid: 75558 00:09:05.598 [Child] Asynchronous Event Request test 00:09:05.598 [Child] Attached to 0000:00:11.0 00:09:05.598 [Child] Attached to 0000:00:13.0 00:09:05.598 [Child] Attached to 0000:00:10.0 00:09:05.598 [Child] Attached to 0000:00:12.0 00:09:05.598 [Child] Registering asynchronous event callbacks... 00:09:05.598 [Child] Getting orig temperature thresholds of all controllers 00:09:05.598 [Child] 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:09:05.598 [Child] 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:09:05.598 [Child] 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:09:05.598 [Child] 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:09:05.598 [Child] Waiting for all controllers to trigger AER and reset threshold 00:09:05.598 [Child] 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:09:05.598 [Child] 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:09:05.598 [Child] 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:09:05.599 [Child] 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:09:05.599 [Child] 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:09:05.599 [Child] 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:09:05.599 [Child] 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:09:05.599 [Child] 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:09:05.599 [Child] Cleaning up... 00:09:05.599 Asynchronous Event Request test 00:09:05.599 Attached to 0000:00:11.0 00:09:05.599 Attached to 0000:00:13.0 00:09:05.599 Attached to 0000:00:10.0 00:09:05.599 Attached to 0000:00:12.0 00:09:05.599 Reset controller to setup AER completions for this process 00:09:05.599 Registering asynchronous event callbacks... 
00:09:05.599 Getting orig temperature thresholds of all controllers 00:09:05.599 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:09:05.599 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:09:05.599 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:09:05.599 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:09:05.599 Setting all controllers temperature threshold low to trigger AER 00:09:05.599 Waiting for all controllers temperature threshold to be set lower 00:09:05.599 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:09:05.599 aer_cb - Resetting Temp Threshold for device: 0000:00:11.0 00:09:05.599 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:09:05.599 aer_cb - Resetting Temp Threshold for device: 0000:00:13.0 00:09:05.599 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:09:05.599 aer_cb - Resetting Temp Threshold for device: 0000:00:10.0 00:09:05.599 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:09:05.599 aer_cb - Resetting Temp Threshold for device: 0000:00:12.0 00:09:05.599 Waiting for all controllers to trigger AER and reset threshold 00:09:05.599 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:09:05.599 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:09:05.599 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:09:05.599 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:09:05.599 Cleaning up... 00:09:05.599 00:09:05.599 real 0m0.409s 00:09:05.599 user 0m0.136s 00:09:05.599 sys 0m0.164s 00:09:05.599 06:42:58 nvme.nvme_multi_aen -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:05.599 ************************************ 00:09:05.599 END TEST nvme_multi_aen 00:09:05.599 ************************************ 00:09:05.599 06:42:58 nvme.nvme_multi_aen -- common/autotest_common.sh@10 -- # set +x 00:09:05.857 06:42:58 nvme -- nvme/nvme.sh@99 -- # run_test nvme_startup /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 00:09:05.857 06:42:58 nvme -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:09:05.857 06:42:58 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:05.857 06:42:58 nvme -- common/autotest_common.sh@10 -- # set +x 00:09:05.857 ************************************ 00:09:05.857 START TEST nvme_startup 00:09:05.857 ************************************ 00:09:05.857 06:42:58 nvme.nvme_startup -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 00:09:05.857 Initializing NVMe Controllers 00:09:05.857 Attached to 0000:00:11.0 00:09:05.857 Attached to 0000:00:13.0 00:09:05.857 Attached to 0000:00:10.0 00:09:05.857 Attached to 0000:00:12.0 00:09:05.857 Initialization complete. 00:09:05.857 Time used:136993.797 (us). 
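nvme_startup reports bring-up cost as a single "Time used" figure. A sketch of how such a measurement can be taken around the probe sweep with the env tick counter; the printf format mirrors the log line above, but the test's internals may differ:

#include <stdio.h>
#include "spdk/env.h"
#include "spdk/nvme.h"

/* Time the probe/attach sweep and convert ticks to microseconds. */
static void
time_startup(spdk_nvme_probe_cb probe_cb, spdk_nvme_attach_cb attach_cb)
{
    uint64_t start = spdk_get_ticks();

    spdk_nvme_probe(NULL, NULL, probe_cb, attach_cb, NULL);

    uint64_t elapsed = spdk_get_ticks() - start;
    printf("Time used:%10.3f (us).\n",
           (double)elapsed * 1000000.0 / spdk_get_ticks_hz());
}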
00:09:05.857 ************************************ 00:09:05.857 END TEST nvme_startup 00:09:05.857 ************************************ 00:09:05.857 00:09:05.857 real 0m0.191s 00:09:05.857 user 0m0.055s 00:09:05.857 sys 0m0.093s 00:09:05.857 06:42:58 nvme.nvme_startup -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:05.857 06:42:58 nvme.nvme_startup -- common/autotest_common.sh@10 -- # set +x 00:09:05.857 06:42:58 nvme -- nvme/nvme.sh@100 -- # run_test nvme_multi_secondary nvme_multi_secondary 00:09:05.857 06:42:58 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:05.857 06:42:58 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:05.857 06:42:58 nvme -- common/autotest_common.sh@10 -- # set +x 00:09:05.857 ************************************ 00:09:05.857 START TEST nvme_multi_secondary 00:09:05.857 ************************************ 00:09:05.857 06:42:58 nvme.nvme_multi_secondary -- common/autotest_common.sh@1129 -- # nvme_multi_secondary 00:09:05.857 06:42:58 nvme.nvme_multi_secondary -- nvme/nvme.sh@52 -- # pid0=75609 00:09:05.857 06:42:58 nvme.nvme_multi_secondary -- nvme/nvme.sh@51 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x1 00:09:05.857 06:42:58 nvme.nvme_multi_secondary -- nvme/nvme.sh@54 -- # pid1=75610 00:09:05.857 06:42:58 nvme.nvme_multi_secondary -- nvme/nvme.sh@53 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 00:09:05.857 06:42:58 nvme.nvme_multi_secondary -- nvme/nvme.sh@55 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x4 00:09:09.141 Initializing NVMe Controllers 00:09:09.141 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:09:09.141 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:09:09.141 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:09:09.141 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:09:09.141 Associating PCIE (0000:00:11.0) NSID 1 with lcore 2 00:09:09.141 Associating PCIE (0000:00:13.0) NSID 1 with lcore 2 00:09:09.141 Associating PCIE (0000:00:10.0) NSID 1 with lcore 2 00:09:09.141 Associating PCIE (0000:00:12.0) NSID 1 with lcore 2 00:09:09.141 Associating PCIE (0000:00:12.0) NSID 2 with lcore 2 00:09:09.141 Associating PCIE (0000:00:12.0) NSID 3 with lcore 2 00:09:09.141 Initialization complete. Launching workers. 
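All three perf instances in nvme_multi_secondary are launched with -i 0: the shared shm id places them in one multi-process group, so the secondary processes can build qpairs against controllers the primary attached. A sketch of the env setup; note that which process ends up primary is decided by the runtime (the first one to initialize the shared state), not by a flag here:

#include "spdk/env.h"

/* Join multi-process group 0, matching `-i 0` on the perf command lines.
 * The first process to initialize hugepage state becomes the primary; later
 * ones attach to its shared memory as secondaries. */
static int
init_shared_env(void)
{
    struct spdk_env_opts opts;

    spdk_env_opts_init(&opts);
    opts.name = "perf";  /* illustrative */
    opts.shm_id = 0;     /* same value across all cooperating processes */
    return spdk_env_init(&opts);
}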
00:09:09.141 ======================================================== 00:09:09.141 Latency(us) 00:09:09.141 Device Information : IOPS MiB/s Average min max 00:09:09.141 PCIE (0000:00:11.0) NSID 1 from core 2: 3146.56 12.29 5084.58 875.93 12219.17 00:09:09.141 PCIE (0000:00:13.0) NSID 1 from core 2: 3146.56 12.29 5087.42 854.48 12214.04 00:09:09.141 PCIE (0000:00:10.0) NSID 1 from core 2: 3146.56 12.29 5087.02 848.16 12514.55 00:09:09.141 PCIE (0000:00:12.0) NSID 1 from core 2: 3146.56 12.29 5088.11 851.70 12997.87 00:09:09.141 PCIE (0000:00:12.0) NSID 2 from core 2: 3146.56 12.29 5088.18 897.98 13016.50 00:09:09.141 PCIE (0000:00:12.0) NSID 3 from core 2: 3146.56 12.29 5087.85 880.76 12326.53 00:09:09.141 ======================================================== 00:09:09.141 Total : 18879.33 73.75 5087.19 848.16 13016.50 00:09:09.141 00:09:09.141 06:43:02 nvme.nvme_multi_secondary -- nvme/nvme.sh@56 -- # wait 75609 00:09:09.399 Initializing NVMe Controllers 00:09:09.399 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:09:09.399 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:09:09.399 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:09:09.399 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:09:09.399 Associating PCIE (0000:00:11.0) NSID 1 with lcore 1 00:09:09.399 Associating PCIE (0000:00:13.0) NSID 1 with lcore 1 00:09:09.399 Associating PCIE (0000:00:10.0) NSID 1 with lcore 1 00:09:09.399 Associating PCIE (0000:00:12.0) NSID 1 with lcore 1 00:09:09.399 Associating PCIE (0000:00:12.0) NSID 2 with lcore 1 00:09:09.399 Associating PCIE (0000:00:12.0) NSID 3 with lcore 1 00:09:09.399 Initialization complete. Launching workers. 00:09:09.399 ======================================================== 00:09:09.399 Latency(us) 00:09:09.399 Device Information : IOPS MiB/s Average min max 00:09:09.399 PCIE (0000:00:11.0) NSID 1 from core 1: 7560.21 29.53 2115.90 1090.46 6118.90 00:09:09.399 PCIE (0000:00:13.0) NSID 1 from core 1: 7560.21 29.53 2115.94 1043.69 6231.16 00:09:09.399 PCIE (0000:00:10.0) NSID 1 from core 1: 7560.21 29.53 2114.98 1032.04 6418.83 00:09:09.399 PCIE (0000:00:12.0) NSID 1 from core 1: 7560.21 29.53 2115.87 1093.73 6436.48 00:09:09.400 PCIE (0000:00:12.0) NSID 2 from core 1: 7560.21 29.53 2115.94 1055.48 6585.24 00:09:09.400 PCIE (0000:00:12.0) NSID 3 from core 1: 7560.21 29.53 2115.91 1043.28 5837.78 00:09:09.400 ======================================================== 00:09:09.400 Total : 45361.24 177.19 2115.76 1032.04 6585.24 00:09:09.400 00:09:11.303 Initializing NVMe Controllers 00:09:11.303 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:09:11.303 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:09:11.303 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:09:11.303 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:09:11.303 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:09:11.303 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:09:11.303 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:09:11.303 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:09:11.303 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:09:11.303 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:09:11.303 Initialization complete. Launching workers. 
00:09:11.303 ======================================================== 00:09:11.303 Latency(us) 00:09:11.303 Device Information : IOPS MiB/s Average min max 00:09:11.303 PCIE (0000:00:11.0) NSID 1 from core 0: 10532.75 41.14 1518.69 699.44 6349.71 00:09:11.303 PCIE (0000:00:13.0) NSID 1 from core 0: 10532.95 41.14 1518.65 693.34 5940.58 00:09:11.303 PCIE (0000:00:10.0) NSID 1 from core 0: 10532.95 41.14 1517.78 674.58 5810.06 00:09:11.303 PCIE (0000:00:12.0) NSID 1 from core 0: 10532.95 41.14 1518.61 621.25 6044.42 00:09:11.303 PCIE (0000:00:12.0) NSID 2 from core 0: 10532.95 41.14 1518.59 450.85 6278.74 00:09:11.303 PCIE (0000:00:12.0) NSID 3 from core 0: 10532.95 41.14 1518.57 365.60 6521.27 00:09:11.303 ======================================================== 00:09:11.303 Total : 63197.50 246.87 1518.48 365.60 6521.27 00:09:11.303 00:09:11.303 06:43:04 nvme.nvme_multi_secondary -- nvme/nvme.sh@57 -- # wait 75610 00:09:11.303 06:43:04 nvme.nvme_multi_secondary -- nvme/nvme.sh@61 -- # pid0=75684 00:09:11.303 06:43:04 nvme.nvme_multi_secondary -- nvme/nvme.sh@63 -- # pid1=75685 00:09:11.303 06:43:04 nvme.nvme_multi_secondary -- nvme/nvme.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x4 00:09:11.303 06:43:04 nvme.nvme_multi_secondary -- nvme/nvme.sh@62 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 00:09:11.303 06:43:04 nvme.nvme_multi_secondary -- nvme/nvme.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x1 00:09:14.607 Initializing NVMe Controllers 00:09:14.607 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:09:14.607 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:09:14.607 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:09:14.607 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:09:14.607 Associating PCIE (0000:00:11.0) NSID 1 with lcore 1 00:09:14.607 Associating PCIE (0000:00:13.0) NSID 1 with lcore 1 00:09:14.607 Associating PCIE (0000:00:10.0) NSID 1 with lcore 1 00:09:14.607 Associating PCIE (0000:00:12.0) NSID 1 with lcore 1 00:09:14.607 Associating PCIE (0000:00:12.0) NSID 2 with lcore 1 00:09:14.607 Associating PCIE (0000:00:12.0) NSID 3 with lcore 1 00:09:14.607 Initialization complete. Launching workers. 
00:09:14.607 ======================================================== 00:09:14.607 Latency(us) 00:09:14.607 Device Information : IOPS MiB/s Average min max 00:09:14.607 PCIE (0000:00:11.0) NSID 1 from core 1: 5487.91 21.44 2914.87 769.00 7262.65 00:09:14.607 PCIE (0000:00:13.0) NSID 1 from core 1: 5487.91 21.44 2914.87 779.57 6376.81 00:09:14.607 PCIE (0000:00:10.0) NSID 1 from core 1: 5487.91 21.44 2913.55 739.55 6958.71 00:09:14.607 PCIE (0000:00:12.0) NSID 1 from core 1: 5487.91 21.44 2914.58 763.41 6165.18 00:09:14.607 PCIE (0000:00:12.0) NSID 2 from core 1: 5487.91 21.44 2914.53 776.55 6284.99 00:09:14.607 PCIE (0000:00:12.0) NSID 3 from core 1: 5487.91 21.44 2914.65 769.28 7172.18 00:09:14.607 ======================================================== 00:09:14.607 Total : 32927.48 128.62 2914.51 739.55 7262.65 00:09:14.607 00:09:14.607 Initializing NVMe Controllers 00:09:14.607 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:09:14.607 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:09:14.607 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:09:14.607 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:09:14.607 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:09:14.607 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:09:14.607 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:09:14.607 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:09:14.607 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:09:14.607 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:09:14.607 Initialization complete. Launching workers. 00:09:14.607 ======================================================== 00:09:14.607 Latency(us) 00:09:14.607 Device Information : IOPS MiB/s Average min max 00:09:14.607 PCIE (0000:00:11.0) NSID 1 from core 0: 5402.06 21.10 2961.32 1095.67 6726.63 00:09:14.607 PCIE (0000:00:13.0) NSID 1 from core 0: 5402.06 21.10 2961.31 1034.67 7219.55 00:09:14.607 PCIE (0000:00:10.0) NSID 1 from core 0: 5402.06 21.10 2960.13 844.98 7611.10 00:09:14.607 PCIE (0000:00:12.0) NSID 1 from core 0: 5402.06 21.10 2961.13 776.09 7037.97 00:09:14.607 PCIE (0000:00:12.0) NSID 2 from core 0: 5402.06 21.10 2961.04 634.46 6739.92 00:09:14.607 PCIE (0000:00:12.0) NSID 3 from core 0: 5402.06 21.10 2960.95 537.90 7221.83 00:09:14.607 ======================================================== 00:09:14.607 Total : 32412.36 126.61 2960.98 537.90 7611.10 00:09:14.607 00:09:16.519 Initializing NVMe Controllers 00:09:16.519 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:09:16.519 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:09:16.519 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:09:16.519 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:09:16.519 Associating PCIE (0000:00:11.0) NSID 1 with lcore 2 00:09:16.519 Associating PCIE (0000:00:13.0) NSID 1 with lcore 2 00:09:16.519 Associating PCIE (0000:00:10.0) NSID 1 with lcore 2 00:09:16.519 Associating PCIE (0000:00:12.0) NSID 1 with lcore 2 00:09:16.519 Associating PCIE (0000:00:12.0) NSID 2 with lcore 2 00:09:16.519 Associating PCIE (0000:00:12.0) NSID 3 with lcore 2 00:09:16.519 Initialization complete. Launching workers. 
00:09:16.519 ======================================================== 00:09:16.519 Latency(us) 00:09:16.519 Device Information : IOPS MiB/s Average min max 00:09:16.519 PCIE (0000:00:11.0) NSID 1 from core 2: 3434.32 13.42 4658.46 1020.95 12887.71 00:09:16.519 PCIE (0000:00:13.0) NSID 1 from core 2: 3434.32 13.42 4657.98 1035.42 13171.86 00:09:16.519 PCIE (0000:00:10.0) NSID 1 from core 2: 3434.32 13.42 4656.64 996.58 13584.21 00:09:16.519 PCIE (0000:00:12.0) NSID 1 from core 2: 3434.32 13.42 4658.24 996.69 14522.77 00:09:16.519 PCIE (0000:00:12.0) NSID 2 from core 2: 3434.32 13.42 4658.14 816.87 13777.91 00:09:16.519 PCIE (0000:00:12.0) NSID 3 from core 2: 3434.32 13.42 4658.03 627.80 13376.06 00:09:16.519 ======================================================== 00:09:16.519 Total : 20605.93 80.49 4657.91 627.80 14522.77 00:09:16.519 00:09:16.519 06:43:09 nvme.nvme_multi_secondary -- nvme/nvme.sh@65 -- # wait 75684 00:09:16.519 06:43:09 nvme.nvme_multi_secondary -- nvme/nvme.sh@66 -- # wait 75685 00:09:16.519 00:09:16.519 real 0m10.676s 00:09:16.519 user 0m18.321s 00:09:16.519 sys 0m0.547s 00:09:16.519 06:43:09 nvme.nvme_multi_secondary -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:16.519 06:43:09 nvme.nvme_multi_secondary -- common/autotest_common.sh@10 -- # set +x 00:09:16.519 ************************************ 00:09:16.519 END TEST nvme_multi_secondary 00:09:16.519 ************************************ 00:09:16.780 06:43:09 nvme -- nvme/nvme.sh@101 -- # trap - SIGINT SIGTERM EXIT 00:09:16.780 06:43:09 nvme -- nvme/nvme.sh@102 -- # kill_stub 00:09:16.780 06:43:09 nvme -- common/autotest_common.sh@1093 -- # [[ -e /proc/74634 ]] 00:09:16.780 06:43:09 nvme -- common/autotest_common.sh@1094 -- # kill 74634 00:09:16.780 06:43:09 nvme -- common/autotest_common.sh@1095 -- # wait 74634 00:09:16.780 [2024-11-18 06:43:09.636797] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75557) is not found. Dropping the request. 00:09:16.780 [2024-11-18 06:43:09.636858] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75557) is not found. Dropping the request. 00:09:16.780 [2024-11-18 06:43:09.636874] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75557) is not found. Dropping the request. 00:09:16.780 [2024-11-18 06:43:09.636889] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75557) is not found. Dropping the request. 00:09:16.780 [2024-11-18 06:43:09.637487] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75557) is not found. Dropping the request. 00:09:16.780 [2024-11-18 06:43:09.637540] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75557) is not found. Dropping the request. 00:09:16.780 [2024-11-18 06:43:09.637565] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75557) is not found. Dropping the request. 00:09:16.780 [2024-11-18 06:43:09.637592] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75557) is not found. Dropping the request. 00:09:16.780 [2024-11-18 06:43:09.638382] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75557) is not found. Dropping the request. 
00:09:16.780 [2024-11-18 06:43:09.638464] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75557) is not found. Dropping the request. 00:09:16.780 [2024-11-18 06:43:09.638503] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75557) is not found. Dropping the request. 00:09:16.780 [2024-11-18 06:43:09.638548] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75557) is not found. Dropping the request. 00:09:16.781 [2024-11-18 06:43:09.639345] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75557) is not found. Dropping the request. 00:09:16.781 [2024-11-18 06:43:09.639421] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75557) is not found. Dropping the request. 00:09:16.781 [2024-11-18 06:43:09.639447] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75557) is not found. Dropping the request. 00:09:16.781 [2024-11-18 06:43:09.639474] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75557) is not found. Dropping the request. 00:09:16.781 06:43:09 nvme -- common/autotest_common.sh@1097 -- # rm -f /var/run/spdk_stub0 00:09:16.781 06:43:09 nvme -- common/autotest_common.sh@1101 -- # echo 2 00:09:16.781 06:43:09 nvme -- nvme/nvme.sh@105 -- # run_test bdev_nvme_reset_stuck_adm_cmd /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:09:16.781 06:43:09 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:16.781 06:43:09 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:16.781 06:43:09 nvme -- common/autotest_common.sh@10 -- # set +x 00:09:16.781 ************************************ 00:09:16.781 START TEST bdev_nvme_reset_stuck_adm_cmd 00:09:16.781 ************************************ 00:09:16.781 06:43:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:09:16.781 * Looking for test storage... 
00:09:16.781 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:09:16.781 06:43:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:16.781 06:43:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1693 -- # lcov --version 00:09:16.781 06:43:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:16.781 06:43:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:16.781 06:43:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:16.781 06:43:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:16.781 06:43:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:16.781 06:43:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@336 -- # IFS=.-: 00:09:16.781 06:43:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@336 -- # read -ra ver1 00:09:16.781 06:43:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@337 -- # IFS=.-: 00:09:16.781 06:43:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@337 -- # read -ra ver2 00:09:16.781 06:43:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@338 -- # local 'op=<' 00:09:16.781 06:43:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@340 -- # ver1_l=2 00:09:16.781 06:43:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@341 -- # ver2_l=1 00:09:16.781 06:43:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:16.781 06:43:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@344 -- # case "$op" in 00:09:16.781 06:43:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@345 -- # : 1 00:09:16.781 06:43:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:16.781 06:43:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:16.781 06:43:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@365 -- # decimal 1 00:09:16.781 06:43:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@353 -- # local d=1 00:09:16.781 06:43:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:16.781 06:43:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@355 -- # echo 1 00:09:16.781 06:43:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@365 -- # ver1[v]=1 00:09:16.781 06:43:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@366 -- # decimal 2 00:09:16.781 06:43:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@353 -- # local d=2 00:09:16.781 06:43:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:16.781 06:43:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@355 -- # echo 2 00:09:16.781 06:43:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@366 -- # ver2[v]=2 00:09:16.781 06:43:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:16.781 06:43:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:16.781 06:43:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@368 -- # return 0 00:09:16.781 06:43:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:16.781 06:43:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:16.781 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:16.781 --rc genhtml_branch_coverage=1 00:09:16.781 --rc genhtml_function_coverage=1 00:09:16.781 --rc genhtml_legend=1 00:09:16.781 --rc geninfo_all_blocks=1 00:09:16.781 --rc geninfo_unexecuted_blocks=1 00:09:16.781 00:09:16.781 ' 00:09:16.781 06:43:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:16.781 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:16.781 --rc genhtml_branch_coverage=1 00:09:16.781 --rc genhtml_function_coverage=1 00:09:16.781 --rc genhtml_legend=1 00:09:16.781 --rc geninfo_all_blocks=1 00:09:16.781 --rc geninfo_unexecuted_blocks=1 00:09:16.781 00:09:16.781 ' 00:09:16.781 06:43:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:16.781 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:16.781 --rc genhtml_branch_coverage=1 00:09:16.781 --rc genhtml_function_coverage=1 00:09:16.781 --rc genhtml_legend=1 00:09:16.781 --rc geninfo_all_blocks=1 00:09:16.781 --rc geninfo_unexecuted_blocks=1 00:09:16.781 00:09:16.781 ' 00:09:16.781 06:43:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:16.781 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:16.781 --rc genhtml_branch_coverage=1 00:09:16.781 --rc genhtml_function_coverage=1 00:09:16.781 --rc genhtml_legend=1 00:09:16.781 --rc geninfo_all_blocks=1 00:09:16.781 --rc geninfo_unexecuted_blocks=1 00:09:16.781 00:09:16.781 ' 00:09:16.781 06:43:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@18 -- # ctrlr_name=nvme0 00:09:16.781 06:43:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@20 -- # err_injection_timeout=15000000 00:09:16.781 06:43:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@22 -- # test_timeout=5 00:09:16.781 
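The lt/cmp_versions walk traced just above is the stock scripts/common.sh version comparison: it decides that the installed lcov 1.15 predates version 2, which selects the older set of coverage flags exported as LCOV_OPTS and LCOV. Stripped of the xtrace noise it is roughly the following sketch (simplified; the real helper also implements the other comparison operators and pads missing components):

    lt() {    # lt 1.15 2  ->  status 0 (true): 1 < 2 decides it at the first component
        local -a ver1 ver2
        IFS='.-:' read -ra ver1 <<< "$1"
        IFS='.-:' read -ra ver2 <<< "$2"
        local v n=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for (( v = 0; v < n; v++ )); do
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
        done
        return 1    # equal is not strictly less-than
    }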
06:43:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@25 -- # err_injection_sct=0 00:09:16.781 06:43:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@27 -- # err_injection_sc=1 00:09:16.781 06:43:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # get_first_nvme_bdf 00:09:16.781 06:43:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1509 -- # bdfs=() 00:09:16.781 06:43:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1509 -- # local bdfs 00:09:16.781 06:43:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs)) 00:09:16.781 06:43:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1510 -- # get_nvme_bdfs 00:09:16.781 06:43:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1498 -- # bdfs=() 00:09:16.781 06:43:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1498 -- # local bdfs 00:09:16.781 06:43:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:09:16.781 06:43:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:09:16.781 06:43:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:09:17.043 06:43:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:09:17.043 06:43:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:09:17.043 06:43:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1512 -- # echo 0000:00:10.0 00:09:17.043 06:43:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # bdf=0000:00:10.0 00:09:17.043 06:43:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@30 -- # '[' -z 0000:00:10.0 ']' 00:09:17.043 06:43:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@36 -- # spdk_target_pid=75847 00:09:17.043 06:43:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0xF 00:09:17.043 06:43:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@37 -- # trap 'killprocess "$spdk_target_pid"; exit 1' SIGINT SIGTERM EXIT 00:09:17.043 06:43:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@38 -- # waitforlisten 75847 00:09:17.043 06:43:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@835 -- # '[' -z 75847 ']' 00:09:17.043 06:43:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:17.043 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:17.043 06:43:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:17.043 06:43:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:09:17.043 06:43:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:17.043 06:43:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:09:17.043 [2024-11-18 06:43:09.977794] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 00:09:17.043 [2024-11-18 06:43:09.977937] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75847 ] 00:09:17.304 [2024-11-18 06:43:10.153580] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:17.304 [2024-11-18 06:43:10.188839] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:17.304 [2024-11-18 06:43:10.189161] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:17.304 [2024-11-18 06:43:10.189527] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:09:17.304 [2024-11-18 06:43:10.189570] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:17.877 06:43:10 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:17.877 06:43:10 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@868 -- # return 0 00:09:17.877 06:43:10 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@40 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:10.0 00:09:17.877 06:43:10 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:17.877 06:43:10 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:09:17.877 nvme0n1 00:09:17.877 06:43:10 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:17.877 06:43:10 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # mktemp /tmp/err_inj_XXXXX.txt 00:09:17.877 06:43:10 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # tmp_file=/tmp/err_inj_PRW9z.txt 00:09:17.877 06:43:10 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@44 -- # rpc_cmd bdev_nvme_add_error_injection -n nvme0 --cmd-type admin --opc 10 --timeout-in-us 15000000 --err-count 1 --sct 0 --sc 1 --do_not_submit 00:09:17.877 06:43:10 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:17.877 06:43:10 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:09:17.877 true 00:09:17.877 06:43:10 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:17.877 06:43:10 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # date +%s 00:09:17.877 06:43:10 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # start_time=1731912190 00:09:17.877 06:43:10 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@51 -- # get_feat_pid=75865 00:09:17.877 06:43:10 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@52 -- # trap 'killprocess "$get_feat_pid"; exit 1' SIGINT SIGTERM EXIT 00:09:17.877 06:43:10 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_send_cmd -n nvme0 -t admin -r c2h -c CgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAcAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA== 00:09:17.877 
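What the test has just armed: bdev_nvme_add_error_injection tells nvme0 to hold the next admin command with opcode 10 (0x0a, Get Features) for up to err_injection_timeout=15000000 µs instead of submitting it (--do_not_submit), completing it with sct=0/sc=1 only when aborted. bdev_nvme_send_cmd then fires exactly such a command so that it gets stuck, and the controller reset that follows must complete it manually well inside test_timeout=5 s. Condensed from the trace (rpc.py path and command blob copied from the log):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    cmd=CgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAcAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA==
    # Hold the next Get Features admin command (opc 10) for 15 s instead of submitting it:
    $rpc bdev_nvme_add_error_injection -n nvme0 --cmd-type admin --opc 10 \
        --timeout-in-us 15000000 --err-count 1 --sct 0 --sc 1 --do_not_submit
    # Fire the command that will get stuck: byte 0 of the blob is 0x0a (the opcode) and
    # cdw10 is 0x7 (Number of Queues), matching the completion printed later in the log.
    $rpc bdev_nvme_send_cmd -n nvme0 -t admin -r c2h -c "$cmd" &
    # The bdev_nvme_reset_controller that follows must complete the stuck command,
    # and the elapsed time is then checked against test_timeout=5.
    $rpc bdev_nvme_reset_controller nvme0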
06:43:10 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@55 -- # sleep 2 00:09:19.793 06:43:12 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@57 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:09:19.793 06:43:12 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:19.793 06:43:12 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:09:19.793 [2024-11-18 06:43:12.877004] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0, 0] resetting controller 00:09:19.793 [2024-11-18 06:43:12.877493] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:09:19.793 [2024-11-18 06:43:12.877534] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:0 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:09:19.793 [2024-11-18 06:43:12.877566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:20.055 [2024-11-18 06:43:12.879180] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:10.0, 0] Resetting controller successful. 00:09:20.055 06:43:12 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:20.055 Waiting for RPC error injection (bdev_nvme_send_cmd) process PID: 75865 00:09:20.055 06:43:12 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@59 -- # echo 'Waiting for RPC error injection (bdev_nvme_send_cmd) process PID:' 75865 00:09:20.055 06:43:12 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@60 -- # wait 75865 00:09:20.055 06:43:12 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # date +%s 00:09:20.055 06:43:12 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # diff_time=2 00:09:20.055 06:43:12 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@62 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:09:20.055 06:43:12 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:20.055 06:43:12 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:09:20.055 06:43:12 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:20.055 06:43:12 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@64 -- # trap - SIGINT SIGTERM EXIT 00:09:20.055 06:43:12 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # jq -r .cpl /tmp/err_inj_PRW9z.txt 00:09:20.055 06:43:12 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # spdk_nvme_status=AAAAAAAAAAAAAAAAAAACAA== 00:09:20.055 06:43:12 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 1 255 00:09:20.055 06:43:12 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:09:20.055 06:43:12 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:09:20.055 06:43:12 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:09:20.055 06:43:12 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:09:20.055 06:43:12 nvme.bdev_nvme_reset_stuck_adm_cmd -- 
nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:09:20.055 06:43:12 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:09:20.055 06:43:12 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 1 00:09:20.055 06:43:12 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # nvme_status_sc=0x1 00:09:20.055 06:43:12 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 9 3 00:09:20.055 06:43:12 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:09:20.055 06:43:12 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:09:20.055 06:43:12 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:09:20.055 06:43:12 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:09:20.055 06:43:12 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:09:20.055 06:43:12 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:09:20.055 06:43:12 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 0 00:09:20.055 06:43:12 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # nvme_status_sct=0x0 00:09:20.055 06:43:12 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@71 -- # rm -f /tmp/err_inj_PRW9z.txt 00:09:20.055 06:43:12 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@73 -- # killprocess 75847 00:09:20.055 06:43:12 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@954 -- # '[' -z 75847 ']' 00:09:20.055 06:43:12 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@958 -- # kill -0 75847 00:09:20.055 06:43:12 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@959 -- # uname 00:09:20.055 06:43:12 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:20.055 06:43:12 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75847 00:09:20.055 killing process with pid 75847 00:09:20.055 06:43:12 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:20.055 06:43:12 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:20.055 06:43:12 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75847' 00:09:20.055 06:43:12 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@973 -- # kill 75847 00:09:20.055 06:43:12 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@978 -- # wait 75847 00:09:20.316 06:43:13 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@75 -- # (( err_injection_sc != nvme_status_sc || err_injection_sct != nvme_status_sct )) 00:09:20.316 06:43:13 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@79 -- # (( diff_time > test_timeout )) 00:09:20.316 00:09:20.316 real 0m3.516s 00:09:20.316 user 0m12.411s 00:09:20.316 sys 0m0.547s 00:09:20.316 06:43:13 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1130 -- # 
xtrace_disable 00:09:20.316 ************************************ 00:09:20.316 END TEST bdev_nvme_reset_stuck_adm_cmd 00:09:20.316 ************************************ 00:09:20.316 06:43:13 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:09:20.316 06:43:13 nvme -- nvme/nvme.sh@107 -- # [[ y == y ]] 00:09:20.316 06:43:13 nvme -- nvme/nvme.sh@108 -- # run_test nvme_fio nvme_fio_test 00:09:20.316 06:43:13 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:20.316 06:43:13 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:20.316 06:43:13 nvme -- common/autotest_common.sh@10 -- # set +x 00:09:20.316 ************************************ 00:09:20.316 START TEST nvme_fio 00:09:20.316 ************************************ 00:09:20.316 06:43:13 nvme.nvme_fio -- common/autotest_common.sh@1129 -- # nvme_fio_test 00:09:20.316 06:43:13 nvme.nvme_fio -- nvme/nvme.sh@31 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme 00:09:20.316 06:43:13 nvme.nvme_fio -- nvme/nvme.sh@32 -- # ran_fio=false 00:09:20.316 06:43:13 nvme.nvme_fio -- nvme/nvme.sh@33 -- # get_nvme_bdfs 00:09:20.316 06:43:13 nvme.nvme_fio -- common/autotest_common.sh@1498 -- # bdfs=() 00:09:20.316 06:43:13 nvme.nvme_fio -- common/autotest_common.sh@1498 -- # local bdfs 00:09:20.316 06:43:13 nvme.nvme_fio -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:09:20.316 06:43:13 nvme.nvme_fio -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:09:20.316 06:43:13 nvme.nvme_fio -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:09:20.316 06:43:13 nvme.nvme_fio -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:09:20.316 06:43:13 nvme.nvme_fio -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:09:20.316 06:43:13 nvme.nvme_fio -- nvme/nvme.sh@33 -- # bdfs=('0000:00:10.0' '0000:00:11.0' '0000:00:12.0' '0000:00:13.0') 00:09:20.316 06:43:13 nvme.nvme_fio -- nvme/nvme.sh@33 -- # local bdfs bdf 00:09:20.316 06:43:13 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:09:20.316 06:43:13 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' 00:09:20.316 06:43:13 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:09:20.579 06:43:13 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' 00:09:20.579 06:43:13 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:09:20.841 06:43:13 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:09:20.841 06:43:13 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:09:20.841 06:43:13 nvme.nvme_fio -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:09:20.841 06:43:13 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:09:20.841 06:43:13 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:09:20.841 06:43:13 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local sanitizers 00:09:20.841 06:43:13 nvme.nvme_fio -- 
common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:09:20.841 06:43:13 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # shift 00:09:20.841 06:43:13 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # local asan_lib= 00:09:20.841 06:43:13 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:09:20.841 06:43:13 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:09:20.841 06:43:13 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # grep libasan 00:09:20.841 06:43:13 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:09:20.841 06:43:13 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:09:20.841 06:43:13 nvme.nvme_fio -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:09:20.841 06:43:13 nvme.nvme_fio -- common/autotest_common.sh@1351 -- # break 00:09:20.841 06:43:13 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:09:20.841 06:43:13 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:09:21.102 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:09:21.102 fio-3.35 00:09:21.102 Starting 1 thread 00:09:26.394 00:09:26.394 test: (groupid=0, jobs=1): err= 0: pid=75993: Mon Nov 18 06:43:18 2024 00:09:26.394 read: IOPS=22.4k, BW=87.3MiB/s (91.6MB/s)(175MiB/2001msec) 00:09:26.394 slat (nsec): min=3788, max=88069, avg=5103.92, stdev=2207.87 00:09:26.394 clat (usec): min=192, max=14709, avg=2853.64, stdev=921.68 00:09:26.394 lat (usec): min=197, max=14762, avg=2858.74, stdev=922.81 00:09:26.394 clat percentiles (usec): 00:09:26.394 | 1.00th=[ 1958], 5.00th=[ 2147], 10.00th=[ 2212], 20.00th=[ 2311], 00:09:26.394 | 30.00th=[ 2376], 40.00th=[ 2474], 50.00th=[ 2573], 60.00th=[ 2671], 00:09:26.394 | 70.00th=[ 2835], 80.00th=[ 3130], 90.00th=[ 3884], 95.00th=[ 4948], 00:09:26.394 | 99.00th=[ 6259], 99.50th=[ 6587], 99.90th=[ 8717], 99.95th=[12387], 00:09:26.394 | 99.99th=[14615] 00:09:26.394 bw ( KiB/s): min=79728, max=89504, per=94.74%, avg=84736.00, stdev=4892.42, samples=3 00:09:26.394 iops : min=19932, max=22376, avg=21184.00, stdev=1223.10, samples=3 00:09:26.394 write: IOPS=22.2k, BW=86.8MiB/s (91.0MB/s)(174MiB/2001msec); 0 zone resets 00:09:26.394 slat (nsec): min=3907, max=54795, avg=5317.42, stdev=2081.30 00:09:26.394 clat (usec): min=200, max=14636, avg=2870.51, stdev=926.92 00:09:26.394 lat (usec): min=205, max=14653, avg=2875.82, stdev=927.99 00:09:26.394 clat percentiles (usec): 00:09:26.394 | 1.00th=[ 1975], 5.00th=[ 2147], 10.00th=[ 2212], 20.00th=[ 2311], 00:09:26.394 | 30.00th=[ 2409], 40.00th=[ 2474], 50.00th=[ 2573], 60.00th=[ 2704], 00:09:26.394 | 70.00th=[ 2868], 80.00th=[ 3130], 90.00th=[ 3916], 95.00th=[ 4948], 00:09:26.394 | 99.00th=[ 6259], 99.50th=[ 6587], 99.90th=[10683], 99.95th=[12518], 00:09:26.394 | 99.99th=[14484] 00:09:26.394 bw ( KiB/s): min=79632, max=89520, per=95.50%, avg=84837.33, stdev=4964.68, samples=3 00:09:26.394 iops : min=19908, max=22380, avg=21209.33, stdev=1241.17, samples=3 00:09:26.394 lat (usec) : 250=0.01%, 500=0.01%, 750=0.01%, 1000=0.01% 00:09:26.394 lat (msec) : 2=1.41%, 4=89.08%, 10=9.37%, 20=0.10% 00:09:26.394 cpu : usr=99.00%, sys=0.15%, 
ctx=18, majf=0, minf=627 00:09:26.394 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:09:26.394 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:26.394 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:26.394 issued rwts: total=44744,44439,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:26.394 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:26.394 00:09:26.394 Run status group 0 (all jobs): 00:09:26.394 READ: bw=87.3MiB/s (91.6MB/s), 87.3MiB/s-87.3MiB/s (91.6MB/s-91.6MB/s), io=175MiB (183MB), run=2001-2001msec 00:09:26.394 WRITE: bw=86.8MiB/s (91.0MB/s), 86.8MiB/s-86.8MiB/s (91.0MB/s-91.0MB/s), io=174MiB (182MB), run=2001-2001msec 00:09:26.394 ----------------------------------------------------- 00:09:26.394 Suppressions used: 00:09:26.394 count bytes template 00:09:26.394 1 32 /usr/src/fio/parse.c 00:09:26.394 1 8 libtcmalloc_minimal.so 00:09:26.394 ----------------------------------------------------- 00:09:26.394 00:09:26.394 06:43:18 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:09:26.394 06:43:18 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:09:26.394 06:43:18 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' 00:09:26.394 06:43:18 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:09:26.394 06:43:19 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:09:26.394 06:43:19 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' 00:09:26.394 06:43:19 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:09:26.394 06:43:19 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:09:26.394 06:43:19 nvme.nvme_fio -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:09:26.394 06:43:19 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:09:26.394 06:43:19 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:09:26.394 06:43:19 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local sanitizers 00:09:26.394 06:43:19 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:09:26.394 06:43:19 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # shift 00:09:26.394 06:43:19 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # local asan_lib= 00:09:26.394 06:43:19 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:09:26.394 06:43:19 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:09:26.394 06:43:19 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # grep libasan 00:09:26.394 06:43:19 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:09:26.394 06:43:19 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:09:26.394 06:43:19 nvme.nvme_fio -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:09:26.394 06:43:19 nvme.nvme_fio -- common/autotest_common.sh@1351 -- # break 00:09:26.394 06:43:19 nvme.nvme_fio -- 
common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:09:26.394 06:43:19 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:09:26.394 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:09:26.394 fio-3.35 00:09:26.394 Starting 1 thread 00:09:33.047 00:09:33.047 test: (groupid=0, jobs=1): err= 0: pid=76048: Mon Nov 18 06:43:25 2024 00:09:33.047 read: IOPS=18.8k, BW=73.4MiB/s (77.0MB/s)(147MiB/2001msec) 00:09:33.047 slat (usec): min=4, max=105, avg= 5.70, stdev= 2.92 00:09:33.047 clat (usec): min=230, max=12170, avg=3380.98, stdev=1272.66 00:09:33.047 lat (usec): min=235, max=12211, avg=3386.68, stdev=1273.98 00:09:33.047 clat percentiles (usec): 00:09:33.047 | 1.00th=[ 1909], 5.00th=[ 2245], 10.00th=[ 2343], 20.00th=[ 2474], 00:09:33.047 | 30.00th=[ 2638], 40.00th=[ 2769], 50.00th=[ 2933], 60.00th=[ 3097], 00:09:33.047 | 70.00th=[ 3359], 80.00th=[ 4146], 90.00th=[ 5538], 95.00th=[ 6259], 00:09:33.047 | 99.00th=[ 7373], 99.50th=[ 7767], 99.90th=[ 8586], 99.95th=[ 9372], 00:09:33.047 | 99.99th=[11863] 00:09:33.047 bw ( KiB/s): min=61696, max=74256, per=93.06%, avg=69962.67, stdev=7160.93, samples=3 00:09:33.047 iops : min=15424, max=18564, avg=17490.67, stdev=1790.23, samples=3 00:09:33.047 write: IOPS=18.8k, BW=73.4MiB/s (77.0MB/s)(147MiB/2001msec); 0 zone resets 00:09:33.047 slat (nsec): min=4364, max=79744, avg=5931.04, stdev=2834.15 00:09:33.047 clat (usec): min=207, max=11964, avg=3406.02, stdev=1278.80 00:09:33.047 lat (usec): min=212, max=11983, avg=3411.95, stdev=1280.10 00:09:33.047 clat percentiles (usec): 00:09:33.047 | 1.00th=[ 1942], 5.00th=[ 2245], 10.00th=[ 2343], 20.00th=[ 2507], 00:09:33.047 | 30.00th=[ 2638], 40.00th=[ 2802], 50.00th=[ 2933], 60.00th=[ 3097], 00:09:33.047 | 70.00th=[ 3392], 80.00th=[ 4228], 90.00th=[ 5604], 95.00th=[ 6259], 00:09:33.047 | 99.00th=[ 7373], 99.50th=[ 7832], 99.90th=[ 8717], 99.95th=[ 9503], 00:09:33.047 | 99.99th=[11600] 00:09:33.047 bw ( KiB/s): min=61640, max=74320, per=93.04%, avg=69970.67, stdev=7216.91, samples=3 00:09:33.047 iops : min=15410, max=18580, avg=17492.67, stdev=1804.23, samples=3 00:09:33.047 lat (usec) : 250=0.01%, 500=0.01%, 750=0.02%, 1000=0.02% 00:09:33.047 lat (msec) : 2=1.25%, 4=77.04%, 10=21.62%, 20=0.04% 00:09:33.047 cpu : usr=98.85%, sys=0.15%, ctx=27, majf=0, minf=626 00:09:33.047 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:09:33.047 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:33.047 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:33.047 issued rwts: total=37608,37622,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:33.047 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:33.047 00:09:33.047 Run status group 0 (all jobs): 00:09:33.047 READ: bw=73.4MiB/s (77.0MB/s), 73.4MiB/s-73.4MiB/s (77.0MB/s-77.0MB/s), io=147MiB (154MB), run=2001-2001msec 00:09:33.047 WRITE: bw=73.4MiB/s (77.0MB/s), 73.4MiB/s-73.4MiB/s (77.0MB/s-77.0MB/s), io=147MiB (154MB), run=2001-2001msec 00:09:33.047 ----------------------------------------------------- 00:09:33.047 Suppressions used: 00:09:33.047 count bytes template 00:09:33.047 1 32 /usr/src/fio/parse.c 00:09:33.047 1 8 libtcmalloc_minimal.so 00:09:33.047 ----------------------------------------------------- 00:09:33.047 
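Each fio pass above repeats the same launch dance, worth spelling out once. The SPDK ioengine is dlopen()ed by fio, so when SPDK is built with ASAN the sanitizer runtime has to be loaded before anything else in the process; the ldd | grep libasan | awk '{print $3}' probe in the trace finds it, and both it and the plugin get LD_PRELOADed. The --filename also swaps the colons of the PCI address for dots (traddr=0000.00.11.0) because fio treats ':' as a separator inside filenames. Boiled down to a sketch (paths from the log; the real fio_plugin helper also probes libclang_rt.asan and tolerates a missing sanitizer):

    plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme
    asan_lib=$(ldd "$plugin" | grep libasan | awk '{print $3}')   # /usr/lib64/libasan.so.8 here
    export LD_PRELOAD="$asan_lib $plugin"   # preload the plugin too, so fio's dlopen reuses it
    # PCI colons become dots in --filename because fio splits filename lists on ':'.
    /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio \
        '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096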
00:09:33.047 06:43:25 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:09:33.047 06:43:25 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:09:33.047 06:43:25 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' 00:09:33.047 06:43:25 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:09:33.047 06:43:25 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:09:33.047 06:43:25 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' 00:09:33.047 06:43:25 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:09:33.047 06:43:25 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:09:33.047 06:43:25 nvme.nvme_fio -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:09:33.047 06:43:25 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:09:33.047 06:43:25 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:09:33.047 06:43:25 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local sanitizers 00:09:33.047 06:43:25 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:09:33.047 06:43:25 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # shift 00:09:33.047 06:43:25 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # local asan_lib= 00:09:33.047 06:43:25 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:09:33.047 06:43:25 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:09:33.047 06:43:25 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # grep libasan 00:09:33.047 06:43:25 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:09:33.047 06:43:25 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:09:33.047 06:43:25 nvme.nvme_fio -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:09:33.047 06:43:25 nvme.nvme_fio -- common/autotest_common.sh@1351 -- # break 00:09:33.047 06:43:25 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:09:33.047 06:43:25 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:09:33.047 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:09:33.047 fio-3.35 00:09:33.047 Starting 1 thread 00:09:38.343 00:09:38.343 test: (groupid=0, jobs=1): err= 0: pid=76108: Mon Nov 18 06:43:31 2024 00:09:38.343 read: IOPS=15.5k, BW=60.4MiB/s (63.4MB/s)(121MiB/2001msec) 00:09:38.343 slat (nsec): min=4863, max=82751, avg=6601.67, stdev=3729.07 00:09:38.343 clat (usec): min=442, max=11213, avg=4114.84, stdev=1406.84 00:09:38.343 lat (usec): min=448, max=11237, avg=4121.44, stdev=1408.08 00:09:38.343 clat percentiles (usec): 00:09:38.343 | 1.00th=[ 2212], 5.00th=[ 2671], 10.00th=[ 2802], 20.00th=[ 2999], 00:09:38.343 | 
30.00th=[ 3130], 40.00th=[ 3294], 50.00th=[ 3523], 60.00th=[ 3949], 00:09:38.343 | 70.00th=[ 4686], 80.00th=[ 5407], 90.00th=[ 6259], 95.00th=[ 6980], 00:09:38.343 | 99.00th=[ 7898], 99.50th=[ 8291], 99.90th=[ 9765], 99.95th=[10552], 00:09:38.343 | 99.99th=[10945] 00:09:38.343 bw ( KiB/s): min=57480, max=67248, per=100.00%, avg=62158.67, stdev=4896.93, samples=3 00:09:38.343 iops : min=14370, max=16812, avg=15539.67, stdev=1224.23, samples=3 00:09:38.343 write: IOPS=15.5k, BW=60.5MiB/s (63.4MB/s)(121MiB/2001msec); 0 zone resets 00:09:38.343 slat (nsec): min=4989, max=99172, avg=6816.20, stdev=3737.79 00:09:38.343 clat (usec): min=452, max=10949, avg=4133.78, stdev=1398.77 00:09:38.343 lat (usec): min=459, max=10954, avg=4140.60, stdev=1399.96 00:09:38.343 clat percentiles (usec): 00:09:38.343 | 1.00th=[ 2180], 5.00th=[ 2704], 10.00th=[ 2835], 20.00th=[ 3032], 00:09:38.343 | 30.00th=[ 3163], 40.00th=[ 3326], 50.00th=[ 3556], 60.00th=[ 3982], 00:09:38.343 | 70.00th=[ 4686], 80.00th=[ 5407], 90.00th=[ 6259], 95.00th=[ 6980], 00:09:38.343 | 99.00th=[ 7898], 99.50th=[ 8356], 99.90th=[ 9765], 99.95th=[10552], 00:09:38.343 | 99.99th=[10814] 00:09:38.343 bw ( KiB/s): min=56528, max=66480, per=99.64%, avg=61715.67, stdev=4989.49, samples=3 00:09:38.343 iops : min=14132, max=16620, avg=15428.67, stdev=1247.34, samples=3 00:09:38.343 lat (usec) : 500=0.01%, 750=0.01% 00:09:38.343 lat (msec) : 2=0.60%, 4=59.77%, 10=39.54%, 20=0.07% 00:09:38.343 cpu : usr=98.55%, sys=0.20%, ctx=4, majf=0, minf=626 00:09:38.343 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:09:38.343 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:38.343 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:38.343 issued rwts: total=30957,30983,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:38.343 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:38.343 00:09:38.343 Run status group 0 (all jobs): 00:09:38.343 READ: bw=60.4MiB/s (63.4MB/s), 60.4MiB/s-60.4MiB/s (63.4MB/s-63.4MB/s), io=121MiB (127MB), run=2001-2001msec 00:09:38.343 WRITE: bw=60.5MiB/s (63.4MB/s), 60.5MiB/s-60.5MiB/s (63.4MB/s-63.4MB/s), io=121MiB (127MB), run=2001-2001msec 00:09:38.604 ----------------------------------------------------- 00:09:38.604 Suppressions used: 00:09:38.604 count bytes template 00:09:38.604 1 32 /usr/src/fio/parse.c 00:09:38.604 1 8 libtcmalloc_minimal.so 00:09:38.604 ----------------------------------------------------- 00:09:38.604 00:09:38.604 06:43:31 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:09:38.604 06:43:31 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:09:38.604 06:43:31 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' 00:09:38.604 06:43:31 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:09:38.864 06:43:31 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' 00:09:38.864 06:43:31 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:09:39.125 06:43:32 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:09:39.125 06:43:32 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:09:39.126 06:43:32 nvme.nvme_fio -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 
/home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:09:39.126 06:43:32 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:09:39.126 06:43:32 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:09:39.126 06:43:32 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local sanitizers 00:09:39.126 06:43:32 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:09:39.126 06:43:32 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # shift 00:09:39.126 06:43:32 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # local asan_lib= 00:09:39.126 06:43:32 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:09:39.126 06:43:32 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:09:39.126 06:43:32 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # grep libasan 00:09:39.126 06:43:32 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:09:39.126 06:43:32 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:09:39.126 06:43:32 nvme.nvme_fio -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:09:39.126 06:43:32 nvme.nvme_fio -- common/autotest_common.sh@1351 -- # break 00:09:39.126 06:43:32 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:09:39.126 06:43:32 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:09:39.126 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:09:39.126 fio-3.35 00:09:39.126 Starting 1 thread 00:09:43.338 00:09:43.338 test: (groupid=0, jobs=1): err= 0: pid=76171: Mon Nov 18 06:43:36 2024 00:09:43.338 read: IOPS=13.7k, BW=53.7MiB/s (56.3MB/s)(107MiB/2001msec) 00:09:43.338 slat (nsec): min=4895, max=83663, avg=7018.31, stdev=4533.59 00:09:43.338 clat (usec): min=453, max=11433, avg=4624.76, stdev=1532.99 00:09:43.338 lat (usec): min=459, max=11468, avg=4631.78, stdev=1534.25 00:09:43.338 clat percentiles (usec): 00:09:43.338 | 1.00th=[ 2409], 5.00th=[ 2737], 10.00th=[ 2933], 20.00th=[ 3130], 00:09:43.338 | 30.00th=[ 3392], 40.00th=[ 3785], 50.00th=[ 4424], 60.00th=[ 5014], 00:09:43.338 | 70.00th=[ 5538], 80.00th=[ 5997], 90.00th=[ 6718], 95.00th=[ 7308], 00:09:43.338 | 99.00th=[ 8586], 99.50th=[ 9110], 99.90th=[10421], 99.95th=[10814], 00:09:43.338 | 99.99th=[11338] 00:09:43.338 bw ( KiB/s): min=50872, max=57176, per=97.19%, avg=53394.67, stdev=3335.16, samples=3 00:09:43.338 iops : min=12718, max=14294, avg=13348.67, stdev=833.79, samples=3 00:09:43.338 write: IOPS=13.7k, BW=53.6MiB/s (56.2MB/s)(107MiB/2001msec); 0 zone resets 00:09:43.338 slat (nsec): min=4994, max=83398, avg=7262.29, stdev=4455.62 00:09:43.338 clat (usec): min=420, max=11353, avg=4674.93, stdev=1535.15 00:09:43.338 lat (usec): min=437, max=11369, avg=4682.19, stdev=1536.45 00:09:43.338 clat percentiles (usec): 00:09:43.338 | 1.00th=[ 2442], 5.00th=[ 2769], 10.00th=[ 2933], 20.00th=[ 3163], 00:09:43.338 | 30.00th=[ 3425], 40.00th=[ 3851], 50.00th=[ 4490], 60.00th=[ 5080], 00:09:43.338 | 70.00th=[ 5538], 80.00th=[ 6063], 90.00th=[ 6718], 95.00th=[ 7308], 
00:09:43.338 | 99.00th=[ 8586], 99.50th=[ 9241], 99.90th=[10552], 99.95th=[10814], 00:09:43.338 | 99.99th=[11207] 00:09:43.338 bw ( KiB/s): min=51200, max=56960, per=97.50%, avg=53474.67, stdev=3064.91, samples=3 00:09:43.338 iops : min=12800, max=14240, avg=13368.67, stdev=766.23, samples=3 00:09:43.338 lat (usec) : 500=0.01%, 750=0.02%, 1000=0.01% 00:09:43.338 lat (msec) : 2=0.20%, 4=42.92%, 10=56.62%, 20=0.21% 00:09:43.338 cpu : usr=98.20%, sys=0.10%, ctx=15, majf=0, minf=624 00:09:43.338 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:09:43.338 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:43.338 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:43.338 issued rwts: total=27484,27437,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:43.338 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:43.338 00:09:43.338 Run status group 0 (all jobs): 00:09:43.338 READ: bw=53.7MiB/s (56.3MB/s), 53.7MiB/s-53.7MiB/s (56.3MB/s-56.3MB/s), io=107MiB (113MB), run=2001-2001msec 00:09:43.338 WRITE: bw=53.6MiB/s (56.2MB/s), 53.6MiB/s-53.6MiB/s (56.2MB/s-56.2MB/s), io=107MiB (112MB), run=2001-2001msec 00:09:43.600 ----------------------------------------------------- 00:09:43.600 Suppressions used: 00:09:43.600 count bytes template 00:09:43.600 1 32 /usr/src/fio/parse.c 00:09:43.600 1 8 libtcmalloc_minimal.so 00:09:43.600 ----------------------------------------------------- 00:09:43.600 00:09:43.600 ************************************ 00:09:43.600 END TEST nvme_fio 00:09:43.600 ************************************ 00:09:43.600 06:43:36 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:09:43.600 06:43:36 nvme.nvme_fio -- nvme/nvme.sh@46 -- # true 00:09:43.600 00:09:43.600 real 0m23.358s 00:09:43.600 user 0m16.714s 00:09:43.600 sys 0m9.660s 00:09:43.600 06:43:36 nvme.nvme_fio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:43.600 06:43:36 nvme.nvme_fio -- common/autotest_common.sh@10 -- # set +x 00:09:43.600 ************************************ 00:09:43.600 END TEST nvme 00:09:43.600 ************************************ 00:09:43.600 00:09:43.600 real 1m32.027s 00:09:43.600 user 3m33.296s 00:09:43.600 sys 0m20.371s 00:09:43.600 06:43:36 nvme -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:43.600 06:43:36 nvme -- common/autotest_common.sh@10 -- # set +x 00:09:43.862 06:43:36 -- spdk/autotest.sh@213 -- # [[ 0 -eq 1 ]] 00:09:43.862 06:43:36 -- spdk/autotest.sh@217 -- # run_test nvme_scc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:09:43.863 06:43:36 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:43.863 06:43:36 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:43.863 06:43:36 -- common/autotest_common.sh@10 -- # set +x 00:09:43.863 ************************************ 00:09:43.863 START TEST nvme_scc 00:09:43.863 ************************************ 00:09:43.863 06:43:36 nvme_scc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:09:43.863 * Looking for test storage... 
00:09:43.863 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:09:43.863 06:43:36 nvme_scc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:43.863 06:43:36 nvme_scc -- common/autotest_common.sh@1693 -- # lcov --version 00:09:43.863 06:43:36 nvme_scc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:43.863 06:43:36 nvme_scc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:43.863 06:43:36 nvme_scc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:43.863 06:43:36 nvme_scc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:43.863 06:43:36 nvme_scc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:43.863 06:43:36 nvme_scc -- scripts/common.sh@336 -- # IFS=.-: 00:09:43.863 06:43:36 nvme_scc -- scripts/common.sh@336 -- # read -ra ver1 00:09:43.863 06:43:36 nvme_scc -- scripts/common.sh@337 -- # IFS=.-: 00:09:43.863 06:43:36 nvme_scc -- scripts/common.sh@337 -- # read -ra ver2 00:09:43.863 06:43:36 nvme_scc -- scripts/common.sh@338 -- # local 'op=<' 00:09:43.863 06:43:36 nvme_scc -- scripts/common.sh@340 -- # ver1_l=2 00:09:43.863 06:43:36 nvme_scc -- scripts/common.sh@341 -- # ver2_l=1 00:09:43.863 06:43:36 nvme_scc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:43.863 06:43:36 nvme_scc -- scripts/common.sh@344 -- # case "$op" in 00:09:43.863 06:43:36 nvme_scc -- scripts/common.sh@345 -- # : 1 00:09:43.863 06:43:36 nvme_scc -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:43.863 06:43:36 nvme_scc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:43.863 06:43:36 nvme_scc -- scripts/common.sh@365 -- # decimal 1 00:09:43.863 06:43:36 nvme_scc -- scripts/common.sh@353 -- # local d=1 00:09:43.863 06:43:36 nvme_scc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:43.863 06:43:36 nvme_scc -- scripts/common.sh@355 -- # echo 1 00:09:43.863 06:43:36 nvme_scc -- scripts/common.sh@365 -- # ver1[v]=1 00:09:43.863 06:43:36 nvme_scc -- scripts/common.sh@366 -- # decimal 2 00:09:43.863 06:43:36 nvme_scc -- scripts/common.sh@353 -- # local d=2 00:09:43.863 06:43:36 nvme_scc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:43.863 06:43:36 nvme_scc -- scripts/common.sh@355 -- # echo 2 00:09:43.863 06:43:36 nvme_scc -- scripts/common.sh@366 -- # ver2[v]=2 00:09:43.863 06:43:36 nvme_scc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:43.863 06:43:36 nvme_scc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:43.863 06:43:36 nvme_scc -- scripts/common.sh@368 -- # return 0 00:09:43.863 06:43:36 nvme_scc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:43.863 06:43:36 nvme_scc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:43.863 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:43.863 --rc genhtml_branch_coverage=1 00:09:43.863 --rc genhtml_function_coverage=1 00:09:43.863 --rc genhtml_legend=1 00:09:43.863 --rc geninfo_all_blocks=1 00:09:43.863 --rc geninfo_unexecuted_blocks=1 00:09:43.863 00:09:43.863 ' 00:09:43.863 06:43:36 nvme_scc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:43.863 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:43.863 --rc genhtml_branch_coverage=1 00:09:43.863 --rc genhtml_function_coverage=1 00:09:43.863 --rc genhtml_legend=1 00:09:43.863 --rc geninfo_all_blocks=1 00:09:43.863 --rc geninfo_unexecuted_blocks=1 00:09:43.863 00:09:43.863 ' 00:09:43.863 06:43:36 nvme_scc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 
00:09:43.863 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:43.863 --rc genhtml_branch_coverage=1 00:09:43.863 --rc genhtml_function_coverage=1 00:09:43.863 --rc genhtml_legend=1 00:09:43.863 --rc geninfo_all_blocks=1 00:09:43.863 --rc geninfo_unexecuted_blocks=1 00:09:43.863 00:09:43.863 ' 00:09:43.863 06:43:36 nvme_scc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:43.863 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:43.863 --rc genhtml_branch_coverage=1 00:09:43.863 --rc genhtml_function_coverage=1 00:09:43.863 --rc genhtml_legend=1 00:09:43.863 --rc geninfo_all_blocks=1 00:09:43.863 --rc geninfo_unexecuted_blocks=1 00:09:43.863 00:09:43.863 ' 00:09:43.863 06:43:36 nvme_scc -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:09:43.863 06:43:36 nvme_scc -- nvme/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:09:43.863 06:43:36 nvme_scc -- nvme/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/nvme/../../../ 00:09:43.863 06:43:36 nvme_scc -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:09:43.863 06:43:36 nvme_scc -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:43.863 06:43:36 nvme_scc -- scripts/common.sh@15 -- # shopt -s extglob 00:09:43.863 06:43:36 nvme_scc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:43.863 06:43:36 nvme_scc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:43.863 06:43:36 nvme_scc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:43.863 06:43:36 nvme_scc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:43.863 06:43:36 nvme_scc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:43.863 06:43:36 nvme_scc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:43.863 06:43:36 nvme_scc -- paths/export.sh@5 -- # export PATH 00:09:43.863 06:43:36 nvme_scc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
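functions.sh, sourced above, declares three associative arrays (ctrls, nvmes, bdfs) that scan_nvme_ctrls fills: for every /sys/class/nvme/nvme* it checks the PCI address, then runs nvme id-ctrl and folds each "name : value" line of the output into a per-controller array, which is exactly what the long run of eval 'nvme0[vid]="0x1b36"' lines below is doing. The parse loop, reduced to its core (a sketch; the real nvme_get also handles multi-word values via eval and recurses into per-namespace arrays):

    declare -A nvme0
    while IFS=: read -r reg val; do
        reg=${reg//[[:space:]]/}        # 'vid       ' -> 'vid'
        [[ -n $reg && -n $val ]] || continue
        nvme0[$reg]=${val# }            # nvme0[vid]=0x1b36, nvme0[mdts]=7, ...
    done < <(/usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0)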
00:09:43.863 06:43:36 nvme_scc -- nvme/functions.sh@10 -- # ctrls=() 00:09:43.863 06:43:36 nvme_scc -- nvme/functions.sh@10 -- # declare -A ctrls 00:09:43.863 06:43:36 nvme_scc -- nvme/functions.sh@11 -- # nvmes=() 00:09:43.863 06:43:36 nvme_scc -- nvme/functions.sh@11 -- # declare -A nvmes 00:09:43.863 06:43:36 nvme_scc -- nvme/functions.sh@12 -- # bdfs=() 00:09:43.863 06:43:36 nvme_scc -- nvme/functions.sh@12 -- # declare -A bdfs 00:09:43.863 06:43:36 nvme_scc -- nvme/functions.sh@13 -- # ordered_ctrls=() 00:09:43.863 06:43:36 nvme_scc -- nvme/functions.sh@13 -- # declare -a ordered_ctrls 00:09:43.863 06:43:36 nvme_scc -- nvme/functions.sh@14 -- # nvme_name= 00:09:43.863 06:43:36 nvme_scc -- cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:43.863 06:43:36 nvme_scc -- nvme/nvme_scc.sh@12 -- # uname 00:09:43.863 06:43:36 nvme_scc -- nvme/nvme_scc.sh@12 -- # [[ Linux == Linux ]] 00:09:43.863 06:43:36 nvme_scc -- nvme/nvme_scc.sh@12 -- # [[ QEMU == QEMU ]] 00:09:43.863 06:43:36 nvme_scc -- nvme/nvme_scc.sh@14 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:09:44.436 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:44.436 Waiting for block devices as requested 00:09:44.436 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:09:44.436 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:09:44.698 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:09:44.698 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:09:50.011 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:09:50.011 06:43:42 nvme_scc -- nvme/nvme_scc.sh@16 -- # scan_nvme_ctrls 00:09:50.011 06:43:42 nvme_scc -- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci 00:09:50.011 06:43:42 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:09:50.011 06:43:42 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]] 00:09:50.011 06:43:42 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:11.0 00:09:50.011 06:43:42 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:11.0 00:09:50.011 06:43:42 nvme_scc -- scripts/common.sh@18 -- # local i 00:09:50.011 06:43:42 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:09:50.011 06:43:42 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:50.011 06:43:42 nvme_scc -- scripts/common.sh@27 -- # return 0 00:09:50.011 06:43:42 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme0 00:09:50.011 06:43:42 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0 00:09:50.011 06:43:42 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme0 reg val 00:09:50.011 06:43:42 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:50.011 06:43:42 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme0=()' 00:09:50.011 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.011 06:43:42 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0 00:09:50.011 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.011 06:43:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:50.011 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.011 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.011 06:43:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:09:50.011 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vid]="0x1b36"' 00:09:50.011 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vid]=0x1b36 
00:09:50.011 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.011 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.011 06:43:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:09:50.011 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ssvid]="0x1af4"' 00:09:50.011 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ssvid]=0x1af4 00:09:50.011 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.011 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.011 06:43:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12341 ]] 00:09:50.011 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sn]="12341 "' 00:09:50.011 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sn]='12341 ' 00:09:50.011 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.011 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.011 06:43:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:09:50.011 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mn]="QEMU NVMe Ctrl "' 00:09:50.011 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mn]='QEMU NVMe Ctrl ' 00:09:50.011 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.011 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.011 06:43:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:09:50.011 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fr]="8.0.0 "' 00:09:50.011 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fr]='8.0.0 ' 00:09:50.011 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.011 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.011 06:43:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:09:50.011 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rab]="6"' 00:09:50.011 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rab]=6 00:09:50.011 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.011 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.011 06:43:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:09:50.011 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ieee]="525400"' 00:09:50.011 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ieee]=525400 00:09:50.011 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.011 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.011 06:43:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.011 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cmic]="0"' 00:09:50.011 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cmic]=0 00:09:50.011 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.011 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.011 06:43:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:50.011 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mdts]="7"' 00:09:50.011 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mdts]=7 00:09:50.011 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.011 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.011 06:43:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.011 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cntlid]="0"' 00:09:50.011 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cntlid]=0 00:09:50.011 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.011 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # 
read -r reg val 00:09:50.011 06:43:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:09:50.011 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ver]="0x10400"' 00:09:50.011 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ver]=0x10400 00:09:50.011 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.011 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.012 06:43:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.012 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3r]="0"' 00:09:50.012 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rtd3r]=0 00:09:50.012 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.012 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.012 06:43:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.012 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3e]="0"' 00:09:50.012 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rtd3e]=0 00:09:50.012 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.012 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.012 06:43:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:09:50.012 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oaes]="0x100"' 00:09:50.012 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oaes]=0x100 00:09:50.012 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.012 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.012 06:43:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:09:50.012 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ctratt]="0x8000"' 00:09:50.012 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ctratt]=0x8000 00:09:50.012 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.012 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.012 06:43:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.012 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rrls]="0"' 00:09:50.012 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rrls]=0 00:09:50.012 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.012 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.012 06:43:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:50.012 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cntrltype]="1"' 00:09:50.012 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cntrltype]=1 00:09:50.012 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.012 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.012 06:43:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:09:50.012 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fguid]="00000000-0000-0000-0000-000000000000"' 00:09:50.012 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fguid]=00000000-0000-0000-0000-000000000000 00:09:50.012 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.012 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.012 06:43:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.012 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt1]="0"' 00:09:50.012 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt1]=0 00:09:50.012 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.012 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.012 06:43:42 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.012 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt2]="0"' 00:09:50.012 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt2]=0 00:09:50.012 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.012 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.012 06:43:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.012 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt3]="0"' 00:09:50.012 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt3]=0 00:09:50.012 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.012 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.012 06:43:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.012 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nvmsr]="0"' 00:09:50.012 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nvmsr]=0 00:09:50.012 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.012 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.012 06:43:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.012 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vwci]="0"' 00:09:50.012 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vwci]=0 00:09:50.012 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.012 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.012 06:43:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.012 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mec]="0"' 00:09:50.012 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mec]=0 00:09:50.012 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.012 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.012 06:43:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:09:50.012 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oacs]="0x12a"' 00:09:50.012 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oacs]=0x12a 00:09:50.012 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.012 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.012 06:43:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:50.012 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[acl]="3"' 00:09:50.012 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # nvme0[acl]=3 00:09:50.012 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.012 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.012 06:43:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:50.012 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[aerl]="3"' 00:09:50.012 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # nvme0[aerl]=3 00:09:50.012 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.012 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.012 06:43:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:50.012 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[frmw]="0x3"' 00:09:50.012 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # nvme0[frmw]=0x3 00:09:50.012 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.012 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.012 06:43:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:50.012 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[lpa]="0x7"' 00:09:50.012 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # nvme0[lpa]=0x7 
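Fields such as oacs=0x12a captured above are bitmasks, not scalars: per the NVMe base specification, bit 1 is Format NVM, bit 3 Namespace Management, bit 5 Directives and bit 8 Doorbell Buffer Config, which matches what this QEMU controller advertises. The oncs value captured a little later in this scan plays the same role for I/O commands, and ONCS bit 8 (Copy) is presumably the one an scc (simple copy) test keys off. A hedged decode:

  # Hedged sketch: testing capability bits once the scan has stored them.
  # Bit positions follow the NVMe base spec and are assumptions here.
  oacs=0x12a   # optional admin command support, captured above
  oncs=0x15d   # optional NVM command support, captured later in this scan
  (( oacs & (1 << 3) )) && echo "namespace management supported"
  (( oncs & (1 << 8) )) && echo "Copy command supported"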
00:09:50.012 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.012 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.012 06:43:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.012 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[elpe]="0"' 00:09:50.012 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # nvme0[elpe]=0 00:09:50.012 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.012 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.012 06:43:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.012 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[npss]="0"' 00:09:50.012 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # nvme0[npss]=0 00:09:50.012 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.012 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.012 06:43:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.012 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[avscc]="0"' 00:09:50.012 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # nvme0[avscc]=0 00:09:50.012 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.012 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.012 06:43:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.012 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[apsta]="0"' 00:09:50.012 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # nvme0[apsta]=0 00:09:50.012 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.012 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.012 06:43:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:09:50.012 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[wctemp]="343"' 00:09:50.012 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # nvme0[wctemp]=343 00:09:50.012 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.012 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.012 06:43:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:09:50.012 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cctemp]="373"' 00:09:50.012 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cctemp]=373 00:09:50.012 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.012 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.012 06:43:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.012 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mtfa]="0"' 00:09:50.012 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mtfa]=0 00:09:50.012 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.012 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.012 06:43:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.012 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmpre]="0"' 00:09:50.012 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmpre]=0 00:09:50.012 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.012 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.012 06:43:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.012 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmmin]="0"' 00:09:50.012 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmmin]=0 00:09:50.012 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.012 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.012 06:43:42 nvme_scc -- nvme/functions.sh@22 
-- # [[ -n 0 ]] 00:09:50.012 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[tnvmcap]="0"' 00:09:50.012 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # nvme0[tnvmcap]=0 00:09:50.012 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.012 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.012 06:43:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.012 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[unvmcap]="0"' 00:09:50.012 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # nvme0[unvmcap]=0 00:09:50.012 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.012 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.012 06:43:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.012 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rpmbs]="0"' 00:09:50.012 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rpmbs]=0 00:09:50.012 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.012 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.012 06:43:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.012 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[edstt]="0"' 00:09:50.012 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # nvme0[edstt]=0 00:09:50.012 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.012 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.012 06:43:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.012 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[dsto]="0"' 00:09:50.012 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # nvme0[dsto]=0 00:09:50.012 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.012 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.012 06:43:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.012 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fwug]="0"' 00:09:50.012 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fwug]=0 00:09:50.012 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.012 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.012 06:43:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.012 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[kas]="0"' 00:09:50.012 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # nvme0[kas]=0 00:09:50.012 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.012 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.012 06:43:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.012 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hctma]="0"' 00:09:50.012 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hctma]=0 00:09:50.012 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.012 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.012 06:43:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.012 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mntmt]="0"' 00:09:50.012 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mntmt]=0 00:09:50.012 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.012 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.012 06:43:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.012 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mxtmt]="0"' 00:09:50.012 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mxtmt]=0 00:09:50.013 06:43:42 nvme_scc 
-- nvme/functions.sh@21 -- # IFS=: 00:09:50.013 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.013 06:43:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.013 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sanicap]="0"' 00:09:50.013 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sanicap]=0 00:09:50.013 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.013 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.013 06:43:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.013 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmminds]="0"' 00:09:50.013 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmminds]=0 00:09:50.013 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.013 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.013 06:43:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.013 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmmaxd]="0"' 00:09:50.013 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmmaxd]=0 00:09:50.013 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.013 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.013 06:43:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.013 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nsetidmax]="0"' 00:09:50.013 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nsetidmax]=0 00:09:50.013 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.013 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.013 06:43:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.013 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[endgidmax]="0"' 00:09:50.013 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # nvme0[endgidmax]=0 00:09:50.013 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.013 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.013 06:43:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.013 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anatt]="0"' 00:09:50.013 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anatt]=0 00:09:50.013 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.013 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.013 06:43:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.013 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anacap]="0"' 00:09:50.013 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anacap]=0 00:09:50.013 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.013 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.013 06:43:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.013 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anagrpmax]="0"' 00:09:50.013 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anagrpmax]=0 00:09:50.013 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.013 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.013 06:43:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.013 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nanagrpid]="0"' 00:09:50.013 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nanagrpid]=0 00:09:50.013 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.013 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.013 06:43:42 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.013 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[pels]="0"' 00:09:50.013 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # nvme0[pels]=0 00:09:50.013 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.013 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.013 06:43:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.013 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[domainid]="0"' 00:09:50.013 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # nvme0[domainid]=0 00:09:50.013 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.013 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.013 06:43:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.013 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[megcap]="0"' 00:09:50.013 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # nvme0[megcap]=0 00:09:50.013 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.013 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.013 06:43:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:09:50.013 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sqes]="0x66"' 00:09:50.013 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sqes]=0x66 00:09:50.013 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.013 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.013 06:43:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:09:50.013 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cqes]="0x44"' 00:09:50.013 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cqes]=0x44 00:09:50.013 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.013 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.013 06:43:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.013 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxcmd]="0"' 00:09:50.013 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxcmd]=0 00:09:50.013 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.013 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.013 06:43:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:09:50.013 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nn]="256"' 00:09:50.013 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nn]=256 00:09:50.013 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.013 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.013 06:43:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:09:50.013 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oncs]="0x15d"' 00:09:50.013 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oncs]=0x15d 00:09:50.013 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.013 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.013 06:43:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.013 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fuses]="0"' 00:09:50.013 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fuses]=0 00:09:50.013 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.013 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.013 06:43:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.013 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fna]="0"' 00:09:50.013 06:43:42 nvme_scc -- nvme/functions.sh@23 
-- # nvme0[fna]=0 00:09:50.013 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.013 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.013 06:43:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:50.013 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vwc]="0x7"' 00:09:50.013 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vwc]=0x7 00:09:50.013 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.013 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.013 06:43:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.013 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[awun]="0"' 00:09:50.013 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # nvme0[awun]=0 00:09:50.013 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.013 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.013 06:43:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.013 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[awupf]="0"' 00:09:50.013 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # nvme0[awupf]=0 00:09:50.013 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.013 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.013 06:43:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.013 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[icsvscc]="0"' 00:09:50.013 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # nvme0[icsvscc]=0 00:09:50.013 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.013 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.013 06:43:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.013 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nwpc]="0"' 00:09:50.013 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nwpc]=0 00:09:50.013 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.013 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.013 06:43:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.013 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[acwu]="0"' 00:09:50.013 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # nvme0[acwu]=0 00:09:50.013 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.013 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.013 06:43:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:50.013 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ocfs]="0x3"' 00:09:50.013 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ocfs]=0x3 00:09:50.013 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.013 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.013 06:43:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:09:50.013 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sgls]="0x1"' 00:09:50.013 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sgls]=0x1 00:09:50.013 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.013 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.013 06:43:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.013 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mnan]="0"' 00:09:50.013 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mnan]=0 00:09:50.013 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.013 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.013 06:43:42 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.013 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxdna]="0"' 00:09:50.013 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxdna]=0 00:09:50.013 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.013 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.013 06:43:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.013 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxcna]="0"' 00:09:50.013 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxcna]=0 00:09:50.013 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.013 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.013 06:43:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12341 ]] 00:09:50.013 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[subnqn]="nqn.2019-08.org.qemu:12341"' 00:09:50.013 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # nvme0[subnqn]=nqn.2019-08.org.qemu:12341 00:09:50.013 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.013 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.013 06:43:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.013 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ioccsz]="0"' 00:09:50.013 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ioccsz]=0 00:09:50.013 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.013 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.013 06:43:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.013 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[iorcsz]="0"' 00:09:50.013 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # nvme0[iorcsz]=0 00:09:50.013 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.013 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.013 06:43:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.013 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[icdoff]="0"' 00:09:50.013 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # nvme0[icdoff]=0 00:09:50.013 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.013 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.013 06:43:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.013 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fcatt]="0"' 00:09:50.013 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fcatt]=0 00:09:50.013 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.013 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.013 06:43:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.013 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[msdbd]="0"' 00:09:50.013 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # nvme0[msdbd]=0 00:09:50.014 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.014 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.014 06:43:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.014 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ofcs]="0"' 00:09:50.014 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ofcs]=0 00:09:50.014 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.014 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.014 06:43:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:09:50.014 06:43:42 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme0[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:09:50.014 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:09:50.014 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.014 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.014 06:43:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:09:50.014 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:09:50.014 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rwt]='0 rwl:0 idle_power:- active_power:-' 00:09:50.014 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.014 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.014 06:43:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:09:50.014 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[active_power_workload]="-"' 00:09:50.014 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # nvme0[active_power_workload]=- 00:09:50.014 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.014 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.014 06:43:42 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme0_ns 00:09:50.014 06:43:42 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:09:50.014 06:43:42 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/nvme0n1 ]] 00:09:50.014 06:43:42 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme0n1 00:09:50.014 06:43:42 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme0n1 id-ns /dev/nvme0n1 00:09:50.014 06:43:42 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme0n1 reg val 00:09:50.014 06:43:42 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:50.014 06:43:42 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme0n1=()' 00:09:50.014 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.014 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.014 06:43:42 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme0n1 00:09:50.014 06:43:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:50.014 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.014 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.014 06:43:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:50.014 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsze]="0x140000"' 00:09:50.014 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsze]=0x140000 00:09:50.014 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.014 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.014 06:43:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:50.014 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[ncap]="0x140000"' 00:09:50.014 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[ncap]=0x140000 00:09:50.014 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.014 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.014 06:43:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:50.014 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nuse]="0x140000"' 00:09:50.014 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nuse]=0x140000 00:09:50.014 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.014 06:43:42 nvme_scc -- nvme/functions.sh@21 -- 
# read -r reg val 00:09:50.014 06:43:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:50.014 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsfeat]="0x14"' 00:09:50.014 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsfeat]=0x14 00:09:50.014 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.014 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.014 06:43:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:50.014 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nlbaf]="7"' 00:09:50.014 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nlbaf]=7 00:09:50.014 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.014 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.014 06:43:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:50.014 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[flbas]="0x4"' 00:09:50.014 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[flbas]=0x4 00:09:50.014 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.014 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.014 06:43:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:50.014 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mc]="0x3"' 00:09:50.014 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mc]=0x3 00:09:50.014 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.014 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.014 06:43:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:50.014 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[dpc]="0x1f"' 00:09:50.014 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dpc]=0x1f 00:09:50.014 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.014 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.014 06:43:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.014 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[dps]="0"' 00:09:50.014 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dps]=0 00:09:50.014 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.014 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.014 06:43:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.014 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nmic]="0"' 00:09:50.014 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nmic]=0 00:09:50.014 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.014 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.014 06:43:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.014 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[rescap]="0"' 00:09:50.014 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[rescap]=0 00:09:50.014 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.014 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.014 06:43:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.014 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[fpi]="0"' 00:09:50.014 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[fpi]=0 00:09:50.014 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.014 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.014 06:43:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:50.014 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme0n1[dlfeat]="1"' 00:09:50.014 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dlfeat]=1 00:09:50.014 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.014 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.014 06:43:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.014 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawun]="0"' 00:09:50.014 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nawun]=0 00:09:50.014 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.014 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.014 06:43:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.014 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawupf]="0"' 00:09:50.014 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nawupf]=0 00:09:50.014 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.014 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.014 06:43:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.014 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nacwu]="0"' 00:09:50.014 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nacwu]=0 00:09:50.014 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.014 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.014 06:43:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.014 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabsn]="0"' 00:09:50.014 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabsn]=0 00:09:50.014 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.014 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.014 06:43:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.014 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabo]="0"' 00:09:50.014 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabo]=0 00:09:50.014 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.014 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.014 06:43:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.014 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabspf]="0"' 00:09:50.014 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabspf]=0 00:09:50.014 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.014 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.014 06:43:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.014 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[noiob]="0"' 00:09:50.014 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[noiob]=0 00:09:50.014 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.014 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.014 06:43:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.014 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmcap]="0"' 00:09:50.014 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nvmcap]=0 00:09:50.014 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.014 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.014 06:43:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.014 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwg]="0"' 00:09:50.014 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npwg]=0 00:09:50.014 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
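The id-ns walk above yields the namespace geometry: nsze=0x140000 is the size in logical blocks, and flbas=0x4 (low nibble 4) selects LBA format 4, which the scan records just below as 'ms:0 lbads:12 rp:0 (in use)', i.e. 2^12 = 4096-byte blocks. A worked check of the resulting capacity, assuming the standard id-ns semantics:

  # Worked example from the values captured in this scan.
  nsze=$((0x140000))   # 1310720 logical blocks
  lbads=12             # from lbaf4, the format flbas=0x4 marks as in use
  echo $(( nsze * (1 << lbads) ))               # 5368709120 bytes
  echo "$(( (nsze * (1 << lbads)) >> 30 )) GiB" # 5 GiB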
00:09:50.014 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.014 06:43:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.015 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwa]="0"' 00:09:50.015 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npwa]=0 00:09:50.015 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.015 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.015 06:43:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.015 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npdg]="0"' 00:09:50.015 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npdg]=0 00:09:50.015 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.015 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.015 06:43:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.015 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npda]="0"' 00:09:50.015 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npda]=0 00:09:50.015 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.015 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.015 06:43:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.015 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nows]="0"' 00:09:50.015 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nows]=0 00:09:50.015 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.015 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.015 06:43:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:50.015 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mssrl]="128"' 00:09:50.015 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mssrl]=128 00:09:50.015 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.015 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.015 06:43:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:50.015 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mcl]="128"' 00:09:50.015 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mcl]=128 00:09:50.015 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.015 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.015 06:43:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:50.015 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[msrc]="127"' 00:09:50.015 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[msrc]=127 00:09:50.015 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.015 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.015 06:43:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.015 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nulbaf]="0"' 00:09:50.015 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nulbaf]=0 00:09:50.015 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.015 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.015 06:43:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.015 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[anagrpid]="0"' 00:09:50.015 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[anagrpid]=0 00:09:50.015 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.015 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.015 06:43:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
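The mssrl=128, mcl=128 and msrc=127 fields recorded just above bound any Copy command against this namespace; assuming the spec's semantics (msrc is 0's-based), that allows up to 128 source ranges of at most 128 blocks each, with at most 128 blocks copied per command. A hedged validity check:

  # Hedged sketch: validating a copy request against the limits above
  # (spec semantics assumed; msrc is 0's-based, so 127 means 128 ranges).
  mssrl=128 mcl=128 msrc=127
  ranges=2 blocks_per_range=64
  if (( ranges <= msrc + 1 )) &&
     (( blocks_per_range <= mssrl )) &&
     (( ranges * blocks_per_range <= mcl )); then
      echo "copy request within device limits"
  fi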
00:09:50.015 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsattr]="0"' 00:09:50.015 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsattr]=0 00:09:50.015 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.015 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.015 06:43:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.015 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmsetid]="0"' 00:09:50.015 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nvmsetid]=0 00:09:50.015 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.015 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.015 06:43:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.015 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[endgid]="0"' 00:09:50.015 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[endgid]=0 00:09:50.015 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.015 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.015 06:43:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:50.015 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nguid]="00000000000000000000000000000000"' 00:09:50.015 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nguid]=00000000000000000000000000000000 00:09:50.015 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.015 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.015 06:43:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:50.015 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[eui64]="0000000000000000"' 00:09:50.015 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[eui64]=0000000000000000 00:09:50.015 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.015 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.015 06:43:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:50.015 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:50.015 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:50.015 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.015 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.015 06:43:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:50.015 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:50.015 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:50.015 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.015 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.015 06:43:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:50.015 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:50.015 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:50.015 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.015 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.015 06:43:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:50.015 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:50.015 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:50.015 06:43:42 
nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.015 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.015 06:43:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:50.015 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:50.015 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:50.015 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.015 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.015 06:43:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:50.015 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:50.015 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:50.015 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.015 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.015 06:43:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:50.015 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:50.015 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:50.015 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.015 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.015 06:43:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:50.015 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:50.015 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:50.015 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.015 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.015 06:43:42 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme0n1 00:09:50.015 06:43:42 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme0 00:09:50.015 06:43:42 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme0_ns 00:09:50.015 06:43:42 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:11.0 00:09:50.015 06:43:42 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme0 00:09:50.015 06:43:42 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:09:50.015 06:43:42 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme1 ]] 00:09:50.015 06:43:42 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:10.0 00:09:50.015 06:43:42 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:10.0 00:09:50.015 06:43:42 nvme_scc -- scripts/common.sh@18 -- # local i 00:09:50.015 06:43:42 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:09:50.016 06:43:42 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:50.016 06:43:42 nvme_scc -- scripts/common.sh@27 -- # return 0 00:09:50.016 06:43:42 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme1 00:09:50.016 06:43:42 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme1 id-ctrl /dev/nvme1 00:09:50.016 06:43:42 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme1 reg val 00:09:50.016 06:43:42 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:50.016 06:43:42 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme1=()' 00:09:50.016 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.016 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.016 06:43:42 nvme_scc -- 
nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme1 00:09:50.016 06:43:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:50.016 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.016 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.016 06:43:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:09:50.016 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vid]="0x1b36"' 00:09:50.016 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vid]=0x1b36 00:09:50.016 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.016 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.016 06:43:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:09:50.016 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ssvid]="0x1af4"' 00:09:50.016 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ssvid]=0x1af4 00:09:50.016 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.016 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.016 06:43:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12340 ]] 00:09:50.016 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sn]="12340 "' 00:09:50.016 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sn]='12340 ' 00:09:50.016 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.016 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.016 06:43:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:09:50.016 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mn]="QEMU NVMe Ctrl "' 00:09:50.016 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mn]='QEMU NVMe Ctrl ' 00:09:50.016 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.016 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.016 06:43:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:09:50.016 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fr]="8.0.0 "' 00:09:50.016 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fr]='8.0.0 ' 00:09:50.016 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.016 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.016 06:43:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:09:50.016 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rab]="6"' 00:09:50.016 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rab]=6 00:09:50.016 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.016 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.016 06:43:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:09:50.016 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ieee]="525400"' 00:09:50.016 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ieee]=525400 00:09:50.016 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.016 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.016 06:43:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.016 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cmic]="0"' 00:09:50.016 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cmic]=0 00:09:50.016 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.016 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.016 06:43:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:50.016 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mdts]="7"' 00:09:50.016 06:43:42 nvme_scc -- 
nvme/functions.sh@23 -- # nvme1[mdts]=7 00:09:50.016 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.016 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.016 06:43:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.016 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cntlid]="0"' 00:09:50.016 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cntlid]=0 00:09:50.016 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.016 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.016 06:43:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:09:50.016 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ver]="0x10400"' 00:09:50.016 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ver]=0x10400 00:09:50.016 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.016 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.016 06:43:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.016 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3r]="0"' 00:09:50.016 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rtd3r]=0 00:09:50.016 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.016 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.016 06:43:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.016 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3e]="0"' 00:09:50.016 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rtd3e]=0 00:09:50.016 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.016 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.016 06:43:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:09:50.016 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oaes]="0x100"' 00:09:50.016 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oaes]=0x100 00:09:50.016 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.016 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.016 06:43:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:09:50.016 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ctratt]="0x8000"' 00:09:50.016 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ctratt]=0x8000 00:09:50.016 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.016 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.016 06:43:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.016 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rrls]="0"' 00:09:50.016 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rrls]=0 00:09:50.016 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.016 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.016 06:43:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:50.016 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cntrltype]="1"' 00:09:50.016 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cntrltype]=1 00:09:50.016 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.016 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.016 06:43:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:09:50.016 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fguid]="00000000-0000-0000-0000-000000000000"' 00:09:50.016 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fguid]=00000000-0000-0000-0000-000000000000 00:09:50.016 
06:43:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.016 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.016 06:43:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.016 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt1]="0"' 00:09:50.016 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt1]=0 00:09:50.016 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.016 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.016 06:43:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.016 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt2]="0"' 00:09:50.016 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt2]=0 00:09:50.016 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.016 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.016 06:43:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.016 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt3]="0"' 00:09:50.016 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt3]=0 00:09:50.016 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.016 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.016 06:43:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.016 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nvmsr]="0"' 00:09:50.016 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nvmsr]=0 00:09:50.016 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.016 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.016 06:43:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.016 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vwci]="0"' 00:09:50.016 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vwci]=0 00:09:50.016 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.016 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.016 06:43:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.016 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mec]="0"' 00:09:50.016 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mec]=0 00:09:50.016 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.016 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.016 06:43:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:09:50.016 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oacs]="0x12a"' 00:09:50.016 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oacs]=0x12a 00:09:50.016 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.016 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.016 06:43:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:50.016 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[acl]="3"' 00:09:50.016 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # nvme1[acl]=3 00:09:50.016 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.016 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.016 06:43:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:50.016 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[aerl]="3"' 00:09:50.016 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # nvme1[aerl]=3 00:09:50.016 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.016 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.017 06:43:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 
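The repeating IFS=: / read -r reg val / eval triplets above are nvme/functions.sh's nvme_get helper flattening `nvme id-ctrl` output into a global bash associative array (nvme1 here): each report line is split on the first colon into a register name and a value, empty values are skipped, and the rest are eval'd into the array. A minimal sketch of that loop, matching the @16-@23 trace lines (the whitespace-trimming steps are assumptions; the real helper may differ). The id-ctrl dump resumes below with frmw, lpa, and the rest.

    # Sketch of the nvme_get pattern visible in the trace (not the verbatim source).
    nvme_get() {
        local ref=$1 reg val                      # functions.sh@17
        shift                                     # functions.sh@18
        local -gA "$ref=()"                       # functions.sh@20: global assoc array
        while IFS=: read -r reg val; do           # functions.sh@21: split on first ':'
            reg=${reg%% *}                        # assumption: trim key to first word
            val=${val# }                          # assumption: drop space after ':'
            [[ -n $val ]] || continue             # functions.sh@22: skip empty values
            eval "${ref}[${reg}]=\"${val}\""      # functions.sh@23
        done < <(/usr/local/src/nvme-cli/nvme "$@")   # functions.sh@16
    }

Called as `nvme_get nvme1 id-ctrl /dev/nvme1`, this is the invocation the @52/@16 lines record; note that `read -r reg val` splits only at the first colon, which is why composite values like the ps0 power-state string survive intact.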
00:09:50.017 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[frmw]="0x3"' 00:09:50.017 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # nvme1[frmw]=0x3 00:09:50.017 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.017 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.017 06:43:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:50.017 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[lpa]="0x7"' 00:09:50.017 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # nvme1[lpa]=0x7 00:09:50.017 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.017 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.017 06:43:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.017 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[elpe]="0"' 00:09:50.017 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # nvme1[elpe]=0 00:09:50.017 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.017 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.017 06:43:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.017 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[npss]="0"' 00:09:50.017 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # nvme1[npss]=0 00:09:50.017 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.017 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.017 06:43:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.017 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[avscc]="0"' 00:09:50.017 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # nvme1[avscc]=0 00:09:50.017 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.017 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.017 06:43:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.017 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[apsta]="0"' 00:09:50.017 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # nvme1[apsta]=0 00:09:50.017 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.017 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.017 06:43:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:09:50.017 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[wctemp]="343"' 00:09:50.017 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # nvme1[wctemp]=343 00:09:50.017 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.017 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.017 06:43:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:09:50.017 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cctemp]="373"' 00:09:50.017 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cctemp]=373 00:09:50.017 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.017 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.017 06:43:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.017 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mtfa]="0"' 00:09:50.017 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mtfa]=0 00:09:50.017 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.017 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.017 06:43:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.017 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmpre]="0"' 00:09:50.017 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmpre]=0 00:09:50.017 06:43:42 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:09:50.017 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.017 06:43:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.017 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmmin]="0"' 00:09:50.017 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmmin]=0 00:09:50.017 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.017 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.017 06:43:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.017 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[tnvmcap]="0"' 00:09:50.017 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # nvme1[tnvmcap]=0 00:09:50.017 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.017 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.017 06:43:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.017 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[unvmcap]="0"' 00:09:50.017 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # nvme1[unvmcap]=0 00:09:50.017 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.017 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.017 06:43:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.017 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rpmbs]="0"' 00:09:50.017 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rpmbs]=0 00:09:50.017 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.017 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.017 06:43:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.017 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[edstt]="0"' 00:09:50.017 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # nvme1[edstt]=0 00:09:50.017 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.017 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.017 06:43:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.017 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[dsto]="0"' 00:09:50.017 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # nvme1[dsto]=0 00:09:50.017 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.017 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.017 06:43:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.017 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fwug]="0"' 00:09:50.017 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fwug]=0 00:09:50.017 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.017 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.017 06:43:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.017 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[kas]="0"' 00:09:50.017 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # nvme1[kas]=0 00:09:50.017 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.017 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.017 06:43:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.017 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hctma]="0"' 00:09:50.017 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hctma]=0 00:09:50.017 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.017 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.017 06:43:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.017 06:43:42 
nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mntmt]="0"' 00:09:50.017 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mntmt]=0 00:09:50.017 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.017 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.017 06:43:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.017 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mxtmt]="0"' 00:09:50.017 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mxtmt]=0 00:09:50.017 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.017 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.017 06:43:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.017 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sanicap]="0"' 00:09:50.017 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sanicap]=0 00:09:50.017 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.017 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.017 06:43:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.017 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmminds]="0"' 00:09:50.017 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmminds]=0 00:09:50.017 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.017 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.017 06:43:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.017 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmmaxd]="0"' 00:09:50.017 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmmaxd]=0 00:09:50.017 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.017 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.017 06:43:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.017 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nsetidmax]="0"' 00:09:50.017 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nsetidmax]=0 00:09:50.017 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.017 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.017 06:43:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.017 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[endgidmax]="0"' 00:09:50.017 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # nvme1[endgidmax]=0 00:09:50.017 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.017 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.017 06:43:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.017 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[anatt]="0"' 00:09:50.017 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anatt]=0 00:09:50.017 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.017 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.017 06:43:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.017 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[anacap]="0"' 00:09:50.017 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anacap]=0 00:09:50.017 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.017 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.017 06:43:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.017 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[anagrpmax]="0"' 00:09:50.017 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anagrpmax]=0 00:09:50.017 06:43:42 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:09:50.017 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.017 06:43:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.017 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nanagrpid]="0"' 00:09:50.017 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nanagrpid]=0 00:09:50.017 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.017 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.017 06:43:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.018 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[pels]="0"' 00:09:50.018 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # nvme1[pels]=0 00:09:50.018 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.018 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.018 06:43:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.018 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[domainid]="0"' 00:09:50.018 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # nvme1[domainid]=0 00:09:50.018 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.018 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.018 06:43:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.018 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[megcap]="0"' 00:09:50.018 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # nvme1[megcap]=0 00:09:50.018 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.018 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.018 06:43:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:09:50.018 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sqes]="0x66"' 00:09:50.018 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sqes]=0x66 00:09:50.018 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.018 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.018 06:43:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:09:50.018 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cqes]="0x44"' 00:09:50.018 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cqes]=0x44 00:09:50.018 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.018 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.018 06:43:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.018 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxcmd]="0"' 00:09:50.018 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxcmd]=0 00:09:50.018 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.018 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.018 06:43:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:09:50.018 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nn]="256"' 00:09:50.018 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nn]=256 00:09:50.018 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.018 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.018 06:43:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:09:50.018 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oncs]="0x15d"' 00:09:50.018 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oncs]=0x15d 00:09:50.018 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.018 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.018 06:43:42 nvme_scc -- nvme/functions.sh@22 -- 
# [[ -n 0 ]] 00:09:50.018 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fuses]="0"' 00:09:50.018 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fuses]=0 00:09:50.018 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.018 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.018 06:43:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.018 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fna]="0"' 00:09:50.018 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fna]=0 00:09:50.018 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.018 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.018 06:43:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:50.018 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vwc]="0x7"' 00:09:50.018 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vwc]=0x7 00:09:50.018 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.018 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.018 06:43:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.018 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[awun]="0"' 00:09:50.018 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # nvme1[awun]=0 00:09:50.018 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.018 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.018 06:43:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.018 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[awupf]="0"' 00:09:50.018 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # nvme1[awupf]=0 00:09:50.018 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.018 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.018 06:43:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.018 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[icsvscc]="0"' 00:09:50.018 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # nvme1[icsvscc]=0 00:09:50.018 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.018 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.018 06:43:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.018 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nwpc]="0"' 00:09:50.018 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nwpc]=0 00:09:50.018 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.018 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.018 06:43:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.018 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[acwu]="0"' 00:09:50.018 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # nvme1[acwu]=0 00:09:50.018 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.018 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.018 06:43:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:50.018 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ocfs]="0x3"' 00:09:50.018 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ocfs]=0x3 00:09:50.018 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.018 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.018 06:43:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:09:50.018 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sgls]="0x1"' 00:09:50.018 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sgls]=0x1 00:09:50.018 06:43:42 nvme_scc 
-- nvme/functions.sh@21 -- # IFS=: 00:09:50.018 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.018 06:43:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.018 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mnan]="0"' 00:09:50.018 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mnan]=0 00:09:50.018 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.018 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.018 06:43:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.018 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxdna]="0"' 00:09:50.018 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxdna]=0 00:09:50.018 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.018 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.018 06:43:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.018 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxcna]="0"' 00:09:50.018 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxcna]=0 00:09:50.018 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.018 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.018 06:43:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12340 ]] 00:09:50.018 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[subnqn]="nqn.2019-08.org.qemu:12340"' 00:09:50.018 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # nvme1[subnqn]=nqn.2019-08.org.qemu:12340 00:09:50.018 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.018 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.018 06:43:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.018 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ioccsz]="0"' 00:09:50.018 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ioccsz]=0 00:09:50.018 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.018 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.018 06:43:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.018 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[iorcsz]="0"' 00:09:50.018 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # nvme1[iorcsz]=0 00:09:50.018 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.018 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.018 06:43:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.018 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[icdoff]="0"' 00:09:50.018 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # nvme1[icdoff]=0 00:09:50.018 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.018 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.018 06:43:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.018 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fcatt]="0"' 00:09:50.018 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fcatt]=0 00:09:50.018 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.018 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.018 06:43:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.018 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[msdbd]="0"' 00:09:50.018 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # nvme1[msdbd]=0 00:09:50.018 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.018 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 
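Just below, once the controller dump closes out (ofcs, the ps0 power-state line, and active_power_workload), the trace switches to per-namespace discovery: functions.sh@53-@58 nameref a per-controller table, glob the controller's sysfs children, and rerun the same helper with id-ns. A condensed sketch, under the same assumptions as above (the wrapper name scan_namespaces and the table-creation line are illustrative, not from the trace):

    # Condensed from the @53-@58 lines: per-namespace pass for one controller.
    scan_namespaces() {
        local ctrl=$1 ctrl_dev=${1##*/} ns ns_dev # e.g. /sys/class/nvme/nvme1, nvme1
        local -gA "${ctrl_dev}_ns=()"             # assumption: table created here
        local -n _ctrl_ns=${ctrl_dev}_ns          # @53: nameref onto nvme1_ns
        for ns in "$ctrl/${ctrl##*/}n"*; do       # @54: nvme1n1, nvme1n2, ...
            [[ -e $ns ]] || continue              # @55
            ns_dev=${ns##*/}                      # @56: nvme1n1
            nvme_get "$ns_dev" id-ns "/dev/$ns_dev"   # @57: same parse loop, id-ns
            _ctrl_ns[${ns##*n}]=$ns_dev           # @58: keyed by namespace number
        done
    }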
00:09:50.018 06:43:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.018 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ofcs]="0"' 00:09:50.018 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ofcs]=0 00:09:50.018 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.018 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.018 06:43:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:09:50.018 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:09:50.018 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:09:50.018 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.018 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.018 06:43:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:09:50.018 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:09:50.018 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rwt]='0 rwl:0 idle_power:- active_power:-' 00:09:50.018 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.018 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.018 06:43:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:09:50.018 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[active_power_workload]="-"' 00:09:50.018 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # nvme1[active_power_workload]=- 00:09:50.019 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.019 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.019 06:43:42 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme1_ns 00:09:50.019 06:43:42 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:09:50.019 06:43:42 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n1 ]] 00:09:50.019 06:43:42 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme1n1 00:09:50.019 06:43:42 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme1n1 id-ns /dev/nvme1n1 00:09:50.019 06:43:42 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme1n1 reg val 00:09:50.019 06:43:42 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:50.019 06:43:42 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme1n1=()' 00:09:50.019 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.019 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.019 06:43:42 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n1 00:09:50.019 06:43:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:50.019 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.019 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.019 06:43:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:09:50.019 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsze]="0x17a17a"' 00:09:50.019 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsze]=0x17a17a 00:09:50.019 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.019 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.019 06:43:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:09:50.019 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[ncap]="0x17a17a"' 00:09:50.019 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # 
nvme1n1[ncap]=0x17a17a 00:09:50.019 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.019 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.019 06:43:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:09:50.019 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nuse]="0x17a17a"' 00:09:50.019 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nuse]=0x17a17a 00:09:50.019 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.019 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.019 06:43:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:50.019 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsfeat]="0x14"' 00:09:50.019 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsfeat]=0x14 00:09:50.019 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.019 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.019 06:43:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:50.019 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nlbaf]="7"' 00:09:50.019 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nlbaf]=7 00:09:50.019 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.019 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.019 06:43:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:50.019 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[flbas]="0x7"' 00:09:50.019 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[flbas]=0x7 00:09:50.019 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.019 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.019 06:43:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:50.019 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[mc]="0x3"' 00:09:50.019 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mc]=0x3 00:09:50.019 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.019 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.019 06:43:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:50.019 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[dpc]="0x1f"' 00:09:50.019 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dpc]=0x1f 00:09:50.019 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.019 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.019 06:43:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.019 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[dps]="0"' 00:09:50.019 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dps]=0 00:09:50.019 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.019 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.019 06:43:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.019 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nmic]="0"' 00:09:50.019 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nmic]=0 00:09:50.019 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.019 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.019 06:43:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.019 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[rescap]="0"' 00:09:50.019 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[rescap]=0 00:09:50.019 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.019 06:43:42 nvme_scc -- 
nvme/functions.sh@21 -- # read -r reg val 00:09:50.019 06:43:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.019 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[fpi]="0"' 00:09:50.019 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[fpi]=0 00:09:50.019 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.019 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.019 06:43:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:50.019 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[dlfeat]="1"' 00:09:50.019 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dlfeat]=1 00:09:50.019 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.019 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.019 06:43:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.019 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawun]="0"' 00:09:50.019 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nawun]=0 00:09:50.019 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.019 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.019 06:43:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.019 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawupf]="0"' 00:09:50.019 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nawupf]=0 00:09:50.019 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.019 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.019 06:43:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.019 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nacwu]="0"' 00:09:50.019 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nacwu]=0 00:09:50.019 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.019 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.019 06:43:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.019 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabsn]="0"' 00:09:50.019 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabsn]=0 00:09:50.019 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.019 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.019 06:43:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.019 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabo]="0"' 00:09:50.019 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabo]=0 00:09:50.019 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.019 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.019 06:43:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.019 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabspf]="0"' 00:09:50.019 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabspf]=0 00:09:50.019 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.019 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.019 06:43:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.019 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[noiob]="0"' 00:09:50.019 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[noiob]=0 00:09:50.019 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.019 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.019 06:43:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.019 06:43:42 nvme_scc -- nvme/functions.sh@23 -- 
# eval 'nvme1n1[nvmcap]="0"' 00:09:50.019 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nvmcap]=0 00:09:50.019 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.019 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.019 06:43:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.019 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwg]="0"' 00:09:50.019 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npwg]=0 00:09:50.019 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.019 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.019 06:43:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.019 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwa]="0"' 00:09:50.019 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npwa]=0 00:09:50.019 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.019 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.019 06:43:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.019 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npdg]="0"' 00:09:50.019 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npdg]=0 00:09:50.019 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.019 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.019 06:43:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.019 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npda]="0"' 00:09:50.019 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npda]=0 00:09:50.019 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.019 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.019 06:43:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.019 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nows]="0"' 00:09:50.019 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nows]=0 00:09:50.019 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.019 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.019 06:43:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:50.019 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[mssrl]="128"' 00:09:50.019 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mssrl]=128 00:09:50.019 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.019 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.019 06:43:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:50.020 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[mcl]="128"' 00:09:50.020 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mcl]=128 00:09:50.020 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.020 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.020 06:43:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:50.020 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[msrc]="127"' 00:09:50.020 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[msrc]=127 00:09:50.020 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.020 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.020 06:43:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.020 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nulbaf]="0"' 00:09:50.020 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nulbaf]=0 00:09:50.020 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # 
IFS=: 00:09:50.020 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.020 06:43:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.020 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[anagrpid]="0"' 00:09:50.020 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[anagrpid]=0 00:09:50.020 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.020 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.020 06:43:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.020 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsattr]="0"' 00:09:50.020 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsattr]=0 00:09:50.020 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.020 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.020 06:43:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.020 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmsetid]="0"' 00:09:50.020 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nvmsetid]=0 00:09:50.020 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.020 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.020 06:43:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.020 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[endgid]="0"' 00:09:50.020 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[endgid]=0 00:09:50.020 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.020 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.020 06:43:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:50.020 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nguid]="00000000000000000000000000000000"' 00:09:50.020 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nguid]=00000000000000000000000000000000 00:09:50.020 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.020 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.020 06:43:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:50.020 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[eui64]="0000000000000000"' 00:09:50.020 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[eui64]=0000000000000000 00:09:50.020 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.020 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.020 06:43:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:50.020 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:50.020 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:50.020 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.020 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.020 06:43:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:50.020 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:50.020 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:50.020 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.020 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.020 06:43:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:50.020 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:50.020 
06:43:42 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:50.020 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.020 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.020 06:43:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:50.020 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:50.020 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:50.020 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.020 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.020 06:43:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:09:50.020 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf4]="ms:0 lbads:12 rp:0 "' 00:09:50.020 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:09:50.020 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.020 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.020 06:43:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:50.020 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:50.020 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:50.020 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.020 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.020 06:43:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:50.020 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:50.020 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:50.020 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.020 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.020 06:43:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:09:50.020 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:09:50.020 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:09:50.020 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.020 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.020 06:43:42 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n1 00:09:50.020 06:43:42 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme1 00:09:50.020 06:43:42 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme1_ns 00:09:50.020 06:43:42 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:10.0 00:09:50.020 06:43:42 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme1 00:09:50.020 06:43:42 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:09:50.020 06:43:42 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme2 ]] 00:09:50.020 06:43:42 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:12.0 00:09:50.020 06:43:42 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:12.0 00:09:50.020 06:43:42 nvme_scc -- scripts/common.sh@18 -- # local i 00:09:50.020 06:43:42 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:12.0 ]] 00:09:50.020 06:43:42 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:50.020 06:43:42 nvme_scc -- scripts/common.sh@27 -- # return 0 00:09:50.020 06:43:42 nvme_scc -- 
nvme/functions.sh@51 -- # ctrl_dev=nvme2 00:09:50.020 06:43:42 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme2 id-ctrl /dev/nvme2 00:09:50.020 06:43:42 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2 reg val 00:09:50.020 06:43:42 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:50.020 06:43:42 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2=()' 00:09:50.020 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.020 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.020 06:43:42 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme2 00:09:50.020 06:43:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:50.020 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.020 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.020 06:43:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:09:50.020 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[vid]="0x1b36"' 00:09:50.020 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vid]=0x1b36 00:09:50.020 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.020 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.020 06:43:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:09:50.020 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ssvid]="0x1af4"' 00:09:50.020 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ssvid]=0x1af4 00:09:50.020 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.020 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.020 06:43:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12342 ]] 00:09:50.020 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sn]="12342 "' 00:09:50.020 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sn]='12342 ' 00:09:50.020 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.020 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.020 06:43:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:09:50.020 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mn]="QEMU NVMe Ctrl "' 00:09:50.021 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mn]='QEMU NVMe Ctrl ' 00:09:50.021 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.021 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.021 06:43:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:09:50.021 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fr]="8.0.0 "' 00:09:50.021 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fr]='8.0.0 ' 00:09:50.021 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.021 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.021 06:43:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:09:50.021 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rab]="6"' 00:09:50.021 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rab]=6 00:09:50.021 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.021 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.021 06:43:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:09:50.021 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ieee]="525400"' 00:09:50.021 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ieee]=525400 00:09:50.021 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.021 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.021 06:43:42 
nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.021 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cmic]="0"' 00:09:50.021 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cmic]=0 00:09:50.021 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.021 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.021 06:43:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:50.021 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mdts]="7"' 00:09:50.021 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mdts]=7 00:09:50.021 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.021 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.021 06:43:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.021 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cntlid]="0"' 00:09:50.021 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cntlid]=0 00:09:50.021 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.021 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.021 06:43:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:09:50.021 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ver]="0x10400"' 00:09:50.021 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ver]=0x10400 00:09:50.021 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.021 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.021 06:43:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.021 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3r]="0"' 00:09:50.021 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rtd3r]=0 00:09:50.021 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.021 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.021 06:43:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.021 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3e]="0"' 00:09:50.021 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rtd3e]=0 00:09:50.021 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.021 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.021 06:43:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:09:50.021 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[oaes]="0x100"' 00:09:50.021 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oaes]=0x100 00:09:50.021 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.021 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.021 06:43:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:09:50.021 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ctratt]="0x8000"' 00:09:50.021 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ctratt]=0x8000 00:09:50.021 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.021 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.021 06:43:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.021 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rrls]="0"' 00:09:50.021 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rrls]=0 00:09:50.021 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.021 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.021 06:43:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:50.021 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cntrltype]="1"' 00:09:50.021 06:43:42 nvme_scc 
-- nvme/functions.sh@23 -- # nvme2[cntrltype]=1 00:09:50.021 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.021 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.021 06:43:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:09:50.021 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fguid]="00000000-0000-0000-0000-000000000000"' 00:09:50.021 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fguid]=00000000-0000-0000-0000-000000000000 00:09:50.021 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.021 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.021 06:43:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.021 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[crdt1]="0"' 00:09:50.021 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt1]=0 00:09:50.021 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.021 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.021 06:43:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.021 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[crdt2]="0"' 00:09:50.021 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt2]=0 00:09:50.021 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.021 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.021 06:43:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.021 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[crdt3]="0"' 00:09:50.021 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt3]=0 00:09:50.021 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.021 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.021 06:43:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.021 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nvmsr]="0"' 00:09:50.021 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nvmsr]=0 00:09:50.021 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.021 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.021 06:43:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.021 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[vwci]="0"' 00:09:50.021 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vwci]=0 00:09:50.021 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.021 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.021 06:43:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.021 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mec]="0"' 00:09:50.021 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mec]=0 00:09:50.021 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.021 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.021 06:43:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:09:50.021 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[oacs]="0x12a"' 00:09:50.021 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oacs]=0x12a 00:09:50.021 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.021 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.021 06:43:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:50.021 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[acl]="3"' 00:09:50.021 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # nvme2[acl]=3 00:09:50.021 06:43:42 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:09:50.021 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.021 06:43:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:50.021 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[aerl]="3"' 00:09:50.021 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # nvme2[aerl]=3 00:09:50.021 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.021 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.021 06:43:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:50.021 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[frmw]="0x3"' 00:09:50.021 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # nvme2[frmw]=0x3 00:09:50.021 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.021 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.021 06:43:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:50.021 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[lpa]="0x7"' 00:09:50.021 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # nvme2[lpa]=0x7 00:09:50.021 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.021 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.021 06:43:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.021 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[elpe]="0"' 00:09:50.021 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # nvme2[elpe]=0 00:09:50.021 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.021 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.021 06:43:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.021 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[npss]="0"' 00:09:50.021 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # nvme2[npss]=0 00:09:50.021 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.021 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.021 06:43:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.021 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[avscc]="0"' 00:09:50.021 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # nvme2[avscc]=0 00:09:50.021 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.021 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.021 06:43:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.021 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[apsta]="0"' 00:09:50.021 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # nvme2[apsta]=0 00:09:50.021 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.021 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.021 06:43:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:09:50.021 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[wctemp]="343"' 00:09:50.021 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # nvme2[wctemp]=343 00:09:50.021 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.021 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.022 06:43:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:09:50.022 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cctemp]="373"' 00:09:50.022 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cctemp]=373 00:09:50.022 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.022 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.022 06:43:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
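For orientation in the nvme2 dump now underway: the @60-@63 lines a little further up are where a finished controller gets filed into the walk's bookkeeping arrays, and the scripts/common.sh lines right after them are the PCI allow/block gate applied before the outer loop takes on nvme2. Roughly, with PCI_ALLOWED/PCI_BLOCKED being SPDK's usual environment lists; both are empty in this run, and which list is consulted first is not fully recoverable from the trace, so treat the ordering as an assumption:

    ctrls["$ctrl_dev"]=$ctrl_dev                  # @60: ctrls[nvme1]=nvme1
    nvmes["$ctrl_dev"]=${ctrl_dev}_ns             # @61: name of its namespace table
    bdfs["$ctrl_dev"]=$pci                        # @62: e.g. 0000:00:10.0
    ordered_ctrls[${ctrl_dev/nvme/}]=$ctrl_dev    # @63: index 1 -> nvme1

    pci_can_use() {                               # scripts/common.sh, condensed
        local i                                   # common.sh@18
        [[ " ${PCI_BLOCKED-} " =~ \ $1\  ]] && return 1   # @21: block-list hit
        [[ -z ${PCI_ALLOWED-} ]] && return 0      # @25/@27: no allow-list -> usable
        for i in $PCI_ALLOWED; do
            [[ $i == "$1" ]] && return 0          # allow-list match required
        done
        return 1
    }

With both lists empty, the @21 regex misses, the @25 emptiness test passes, and @27 returns 0, which is exactly why the walk proceeds to run id-ctrl against /dev/nvme2.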
00:09:50.022 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mtfa]="0"' 00:09:50.022 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mtfa]=0 00:09:50.022 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.022 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.022 06:43:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.022 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmpre]="0"' 00:09:50.022 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmpre]=0 00:09:50.022 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.022 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.022 06:43:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.022 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmmin]="0"' 00:09:50.022 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmmin]=0 00:09:50.022 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.022 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.022 06:43:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.022 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[tnvmcap]="0"' 00:09:50.022 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # nvme2[tnvmcap]=0 00:09:50.022 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.022 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.022 06:43:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.022 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[unvmcap]="0"' 00:09:50.022 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # nvme2[unvmcap]=0 00:09:50.022 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.022 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.022 06:43:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.022 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rpmbs]="0"' 00:09:50.022 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rpmbs]=0 00:09:50.022 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.022 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.022 06:43:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.022 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[edstt]="0"' 00:09:50.022 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # nvme2[edstt]=0 00:09:50.022 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.022 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.022 06:43:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.022 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[dsto]="0"' 00:09:50.022 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # nvme2[dsto]=0 00:09:50.022 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.022 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.022 06:43:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.022 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fwug]="0"' 00:09:50.022 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fwug]=0 00:09:50.022 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.022 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.022 06:43:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.022 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[kas]="0"' 00:09:50.022 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # nvme2[kas]=0 00:09:50.022 06:43:42 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:09:50.022 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.022 06:43:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.022 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hctma]="0"' 00:09:50.022 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hctma]=0 00:09:50.022 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.022 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.022 06:43:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.022 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mntmt]="0"' 00:09:50.022 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mntmt]=0 00:09:50.022 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.022 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.022 06:43:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.022 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mxtmt]="0"' 00:09:50.022 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mxtmt]=0 00:09:50.022 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.022 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.022 06:43:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.022 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sanicap]="0"' 00:09:50.022 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sanicap]=0 00:09:50.022 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.022 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.022 06:43:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.022 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmminds]="0"' 00:09:50.022 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmminds]=0 00:09:50.022 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.022 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.022 06:43:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.022 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmmaxd]="0"' 00:09:50.022 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmmaxd]=0 00:09:50.022 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.022 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.022 06:43:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.022 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nsetidmax]="0"' 00:09:50.022 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nsetidmax]=0 00:09:50.022 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.022 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.022 06:43:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.022 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[endgidmax]="0"' 00:09:50.022 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # nvme2[endgidmax]=0 00:09:50.022 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.022 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.022 06:43:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.022 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[anatt]="0"' 00:09:50.022 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anatt]=0 00:09:50.022 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.022 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.022 06:43:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
00:09:50.022 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[anacap]="0"' 00:09:50.022 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anacap]=0 00:09:50.022 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.022 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.022 06:43:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.022 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[anagrpmax]="0"' 00:09:50.022 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anagrpmax]=0 00:09:50.022 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.022 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.022 06:43:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.022 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nanagrpid]="0"' 00:09:50.022 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nanagrpid]=0 00:09:50.022 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.022 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.022 06:43:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.022 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[pels]="0"' 00:09:50.022 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # nvme2[pels]=0 00:09:50.022 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.022 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.022 06:43:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.022 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[domainid]="0"' 00:09:50.022 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # nvme2[domainid]=0 00:09:50.022 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.022 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.022 06:43:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.022 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[megcap]="0"' 00:09:50.022 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # nvme2[megcap]=0 00:09:50.022 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.022 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.022 06:43:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:09:50.022 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sqes]="0x66"' 00:09:50.022 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sqes]=0x66 00:09:50.022 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.022 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.022 06:43:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:09:50.022 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cqes]="0x44"' 00:09:50.022 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cqes]=0x44 00:09:50.022 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.022 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.022 06:43:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.022 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[maxcmd]="0"' 00:09:50.022 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxcmd]=0 00:09:50.022 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.022 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.022 06:43:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:09:50.022 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nn]="256"' 00:09:50.022 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nn]=256 
00:09:50.022 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.022 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.023 06:43:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:09:50.023 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[oncs]="0x15d"' 00:09:50.023 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oncs]=0x15d 00:09:50.023 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.023 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.023 06:43:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.023 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fuses]="0"' 00:09:50.023 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fuses]=0 00:09:50.023 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.023 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.023 06:43:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.023 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fna]="0"' 00:09:50.023 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fna]=0 00:09:50.023 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.023 06:43:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.023 06:43:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:50.023 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[vwc]="0x7"' 00:09:50.023 06:43:42 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vwc]=0x7 00:09:50.023 06:43:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.023 06:43:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.023 06:43:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.023 06:43:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[awun]="0"' 00:09:50.023 06:43:43 nvme_scc -- nvme/functions.sh@23 -- # nvme2[awun]=0 00:09:50.023 06:43:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.023 06:43:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.023 06:43:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.023 06:43:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[awupf]="0"' 00:09:50.023 06:43:43 nvme_scc -- nvme/functions.sh@23 -- # nvme2[awupf]=0 00:09:50.023 06:43:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.023 06:43:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.023 06:43:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.023 06:43:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[icsvscc]="0"' 00:09:50.023 06:43:43 nvme_scc -- nvme/functions.sh@23 -- # nvme2[icsvscc]=0 00:09:50.023 06:43:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.023 06:43:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.023 06:43:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.023 06:43:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nwpc]="0"' 00:09:50.023 06:43:43 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nwpc]=0 00:09:50.023 06:43:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.023 06:43:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.023 06:43:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.023 06:43:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[acwu]="0"' 00:09:50.023 06:43:43 nvme_scc -- nvme/functions.sh@23 -- # nvme2[acwu]=0 00:09:50.023 06:43:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.023 06:43:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.023 06:43:43 nvme_scc -- nvme/functions.sh@22 -- 
# [[ -n 0x3 ]] 00:09:50.023 06:43:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ocfs]="0x3"' 00:09:50.023 06:43:43 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ocfs]=0x3 00:09:50.023 06:43:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.023 06:43:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.023 06:43:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:09:50.023 06:43:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sgls]="0x1"' 00:09:50.023 06:43:43 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sgls]=0x1 00:09:50.023 06:43:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.023 06:43:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.023 06:43:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.023 06:43:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mnan]="0"' 00:09:50.023 06:43:43 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mnan]=0 00:09:50.023 06:43:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.023 06:43:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.023 06:43:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.023 06:43:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[maxdna]="0"' 00:09:50.023 06:43:43 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxdna]=0 00:09:50.023 06:43:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.023 06:43:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.023 06:43:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.023 06:43:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[maxcna]="0"' 00:09:50.023 06:43:43 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxcna]=0 00:09:50.023 06:43:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.023 06:43:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.023 06:43:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12342 ]] 00:09:50.023 06:43:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[subnqn]="nqn.2019-08.org.qemu:12342"' 00:09:50.023 06:43:43 nvme_scc -- nvme/functions.sh@23 -- # nvme2[subnqn]=nqn.2019-08.org.qemu:12342 00:09:50.023 06:43:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.023 06:43:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.023 06:43:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.023 06:43:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ioccsz]="0"' 00:09:50.023 06:43:43 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ioccsz]=0 00:09:50.023 06:43:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.023 06:43:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.023 06:43:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.023 06:43:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[iorcsz]="0"' 00:09:50.023 06:43:43 nvme_scc -- nvme/functions.sh@23 -- # nvme2[iorcsz]=0 00:09:50.023 06:43:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.023 06:43:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.023 06:43:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.023 06:43:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[icdoff]="0"' 00:09:50.023 06:43:43 nvme_scc -- nvme/functions.sh@23 -- # nvme2[icdoff]=0 00:09:50.023 06:43:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.023 06:43:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.023 06:43:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.023 06:43:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fcatt]="0"' 00:09:50.023 
06:43:43 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fcatt]=0 00:09:50.023 06:43:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.023 06:43:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.023 06:43:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.023 06:43:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[msdbd]="0"' 00:09:50.023 06:43:43 nvme_scc -- nvme/functions.sh@23 -- # nvme2[msdbd]=0 00:09:50.023 06:43:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.023 06:43:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.023 06:43:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.023 06:43:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ofcs]="0"' 00:09:50.023 06:43:43 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ofcs]=0 00:09:50.023 06:43:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.023 06:43:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.023 06:43:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:09:50.023 06:43:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:09:50.023 06:43:43 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:09:50.023 06:43:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.023 06:43:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.023 06:43:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:09:50.023 06:43:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:09:50.023 06:43:43 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rwt]='0 rwl:0 idle_power:- active_power:-' 00:09:50.023 06:43:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.023 06:43:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.023 06:43:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:09:50.023 06:43:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[active_power_workload]="-"' 00:09:50.023 06:43:43 nvme_scc -- nvme/functions.sh@23 -- # nvme2[active_power_workload]=- 00:09:50.023 06:43:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.023 06:43:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.023 06:43:43 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme2_ns 00:09:50.023 06:43:43 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:09:50.023 06:43:43 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n1 ]] 00:09:50.023 06:43:43 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n1 00:09:50.023 06:43:43 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n1 id-ns /dev/nvme2n1 00:09:50.023 06:43:43 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2n1 reg val 00:09:50.023 06:43:43 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:50.023 06:43:43 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2n1=()' 00:09:50.023 06:43:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.023 06:43:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.023 06:43:43 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n1 00:09:50.023 06:43:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:50.023 06:43:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.023 06:43:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.023 06:43:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 
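The trace above is one parse loop in nvme/functions.sh: nvme_get runs an nvme-cli command and splits each output line on ':' into a register name and value, storing non-empty pairs in a global associative array (functions.sh@16-@23 in the trace). A minimal sketch of that pattern, reconstructed from the trace rather than copied from the SPDK source; the NVME_CMD variable and the whitespace trimming are assumptions:

    # Sketch of the nvme_get pattern visible at functions.sh@16-@23 (assumed reconstruction).
    nvme_get() {
        local ref=$1 reg val
        shift
        local -gA "$ref=()"                # e.g. declares global nvme2n1=()
        while IFS=: read -r reg val; do
            [[ -n $val ]] || continue      # skip lines without a ':'-separated value
            reg=${reg// /} val=${val# }    # assumed trimming of padding around 'reg : val'
            eval "${ref}[${reg}]=\"${val}\""   # e.g. nvme2[oncs]=0x15d
        done < <("${NVME_CMD:-nvme}" "$@") # trace shows /usr/local/src/nvme-cli/nvme here
    }

After a call like nvme_get nvme2 id-ctrl /dev/nvme2, fields are available to the rest of the test as ${nvme2[oncs]}, ${nvme2[nn]}, and so on.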
00:09:50.023 06:43:43 nvme_scc -- nvme/functions.sh@23 -- # nvme_get nvme2n1: id-ns fields parsed into nvme2n1[]:
00:09:50.023 06:43:43 nvme_scc --   nsze=0x100000 ncap=0x100000 nuse=0x100000 nsfeat=0x14 nlbaf=7 flbas=0x4
00:09:50.024 06:43:43 nvme_scc --   mc=0x3 dpc=0x1f dps=0 nmic=0 rescap=0 fpi=0 dlfeat=1
00:09:50.024 06:43:43 nvme_scc --   nawun=0 nawupf=0 nacwu=0 nabsn=0 nabo=0 nabspf=0 noiob=0 nvmcap=0
00:09:50.024 06:43:43 nvme_scc --   npwg=0 npwa=0 npdg=0 npda=0 nows=0 mssrl=128 mcl=128 msrc=127
00:09:50.024 06:43:43 nvme_scc --   nulbaf=0 anagrpid=0 nsattr=0 nvmsetid=0 endgid=0
00:09:50.025 06:43:43 nvme_scc --   nguid=00000000000000000000000000000000 eui64=0000000000000000
00:09:50.025 06:43:43 nvme_scc --   lbaf0='ms:0 lbads:9 rp:0' lbaf1='ms:8 lbads:9 rp:0' lbaf2='ms:16 lbads:9 rp:0' lbaf3='ms:64 lbads:9 rp:0'
00:09:50.025 06:43:43 nvme_scc --   lbaf4='ms:0 lbads:12 rp:0 (in use)' lbaf5='ms:8 lbads:12 rp:0' lbaf6='ms:16 lbads:12 rp:0' lbaf7='ms:64 lbads:12 rp:0'
00:09:50.025 06:43:43 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n1
00:09:50.025 06:43:43 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"*
00:09:50.025 06:43:43 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n2 ]]
00:09:50.025 06:43:43 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n2
00:09:50.025 06:43:43 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n2 id-ns /dev/nvme2n2
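The _ctrl_ns nameref at functions.sh@53/@58 is how each parsed namespace array gets attached to its controller: the sysfs entries under the controller are globbed, each one is parsed with nvme_get, and the result is indexed by namespace id. A sketch under the assumption that per-controller arrays are named like nvme2_ns (reconstruction, not the verbatim source):

    # Sketch of the namespace scan visible at functions.sh@53-@58 (assumed reconstruction).
    scan_namespaces() {
        local ctrl=$1                          # e.g. /sys/class/nvme/nvme2
        local ns ns_dev
        local -gA "${ctrl##*/}_ns=()"          # declares global nvme2_ns=()
        local -n _ctrl_ns=${ctrl##*/}_ns       # nameref: _ctrl_ns aliases nvme2_ns
        for ns in "$ctrl/${ctrl##*/}n"*; do    # nvme2n1, nvme2n2, nvme2n3, ...
            [[ -e $ns ]] || continue
            ns_dev=${ns##*/}                   # e.g. nvme2n1
            nvme_get "$ns_dev" id-ns "/dev/$ns_dev"
            _ctrl_ns[${ns##*n}]=$ns_dev        # index by namespace id: nvme2_ns[1]=nvme2n1
        done
    }

The ${ns##*n} expansion strips everything through the last 'n', so /sys/class/nvme/nvme2/nvme2n1 yields index 1.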
00:09:50.025 06:43:43 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n2
00:09:50.025 06:43:43 nvme_scc -- nvme/functions.sh@23 -- # nvme_get nvme2n2: id-ns fields parsed into nvme2n2[]:
00:09:50.025 06:43:43 nvme_scc --   nsze=0x100000 ncap=0x100000 nuse=0x100000 nsfeat=0x14 nlbaf=7 flbas=0x4
00:09:50.026 06:43:43 nvme_scc --   mc=0x3 dpc=0x1f dps=0 nmic=0 rescap=0 fpi=0 dlfeat=1
00:09:50.026 06:43:43 nvme_scc --   nawun=0 nawupf=0 nacwu=0 nabsn=0 nabo=0 nabspf=0 noiob=0 nvmcap=0
00:09:50.026 06:43:43 nvme_scc --   npwg=0 npwa=0 npdg=0 npda=0 nows=0 mssrl=128 mcl=128 msrc=127
00:09:50.026 06:43:43 nvme_scc --   nulbaf=0 anagrpid=0 nsattr=0 nvmsetid=0 endgid=0
00:09:50.026 06:43:43 nvme_scc --   nguid=00000000000000000000000000000000 eui64=0000000000000000
00:09:50.026 06:43:43 nvme_scc --   lbaf0='ms:0 lbads:9 rp:0' lbaf1='ms:8 lbads:9 rp:0' lbaf2='ms:16 lbads:9 rp:0' lbaf3='ms:64 lbads:9 rp:0'
00:09:50.027 06:43:43 nvme_scc --   lbaf4='ms:0 lbads:12 rp:0 (in use)' lbaf5='ms:8 lbads:12 rp:0' lbaf6='ms:16 lbads:12 rp:0' lbaf7='ms:64 lbads:12 rp:0'
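The LBA format table repeats identically for every namespace here, so it is worth decoding once: flbas=0x4 (low nibble = format index) selects lbaf4, whose lbads:12 means 2^12 = 4096-byte logical blocks with no per-block metadata (ms:0); with nsze=0x100000 blocks that is a 4 GiB namespace. A quick arithmetic check, using the values from the trace above:

    # Sanity check of the in-use LBA format (values taken from the trace).
    lbads=12 nsze=$((0x100000))
    echo "$((1 << lbads)) bytes per logical block"    # 4096
    echo "$(( (nsze << lbads) >> 30 )) GiB namespace" # 4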
00:09:50.027 06:43:43 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n2
00:09:50.027 06:43:43 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"*
00:09:50.027 06:43:43 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n3 ]]
00:09:50.027 06:43:43 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n3
00:09:50.027 06:43:43 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n3 id-ns /dev/nvme2n3
00:09:50.027 06:43:43 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n3
00:09:50.027 06:43:43 nvme_scc -- nvme/functions.sh@23 -- # nvme_get nvme2n3: id-ns fields parsed into nvme2n3[]:
00:09:50.027 06:43:43 nvme_scc --   nsze=0x100000 ncap=0x100000 nuse=0x100000 nsfeat=0x14 nlbaf=7 flbas=0x4
00:09:50.027 06:43:43 nvme_scc --   mc=0x3 dpc=0x1f dps=0 nmic=0 rescap=0 fpi=0 dlfeat=1
00:09:50.027 06:43:43 nvme_scc --   nawun=0 nawupf=0 nacwu=0 nabsn=0 nabo=0 nabspf=0 noiob=0 nvmcap=0
00:09:50.028 06:43:43 nvme_scc --   npwg=0 npwa=0 npdg=0 npda=0
-- # read -r reg val 00:09:50.028 06:43:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.028 06:43:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nows]="0"' 00:09:50.028 06:43:43 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nows]=0 00:09:50.028 06:43:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.028 06:43:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.028 06:43:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:50.028 06:43:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[mssrl]="128"' 00:09:50.028 06:43:43 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[mssrl]=128 00:09:50.028 06:43:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.028 06:43:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.028 06:43:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:50.028 06:43:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[mcl]="128"' 00:09:50.028 06:43:43 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[mcl]=128 00:09:50.028 06:43:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.028 06:43:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.028 06:43:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:50.028 06:43:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[msrc]="127"' 00:09:50.028 06:43:43 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[msrc]=127 00:09:50.028 06:43:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.028 06:43:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.028 06:43:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.028 06:43:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nulbaf]="0"' 00:09:50.028 06:43:43 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nulbaf]=0 00:09:50.028 06:43:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.028 06:43:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.028 06:43:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.028 06:43:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[anagrpid]="0"' 00:09:50.028 06:43:43 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[anagrpid]=0 00:09:50.028 06:43:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.028 06:43:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.028 06:43:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.028 06:43:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsattr]="0"' 00:09:50.028 06:43:43 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nsattr]=0 00:09:50.028 06:43:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.028 06:43:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.028 06:43:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.028 06:43:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmsetid]="0"' 00:09:50.028 06:43:43 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nvmsetid]=0 00:09:50.028 06:43:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.028 06:43:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.028 06:43:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.028 06:43:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[endgid]="0"' 00:09:50.028 06:43:43 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[endgid]=0 00:09:50.028 06:43:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.028 06:43:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.028 06:43:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:50.028 
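The lbafN descriptors captured next pair with the flbas value recorded earlier (0x4): per the NVMe spec, the low four bits of FLBAS index the in-use LBA format, and each descriptor's lbads field is log2 of the data block size. A sketch of the decode, assuming the ns[] array from the sketch above:

idx=$(( ns[flbas] & 0xf ))              # 0x4 -> descriptor lbaf4
desc=${ns[lbaf$idx]}                    # "ms:0 lbads:12 rp:0 (in use)"
lbads=$(sed -n 's/.*lbads:\([0-9]\+\).*/\1/p' <<<"$desc")
echo "in-use block size: $(( 1 << lbads )) bytes"              # 2^12 = 4096
echo "namespace size: $(( ns[nsze] * (1 << lbads) / 2**30 )) GiB"  # 0x100000 blocks -> 4 GiB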
06:43:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nguid]="00000000000000000000000000000000"' 00:09:50.028 06:43:43 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nguid]=00000000000000000000000000000000 00:09:50.028 06:43:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.028 06:43:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.028 06:43:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:50.028 06:43:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[eui64]="0000000000000000"' 00:09:50.028 06:43:43 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[eui64]=0000000000000000 00:09:50.028 06:43:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.290 06:43:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.290 06:43:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:50.290 06:43:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:50.290 06:43:43 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:50.290 06:43:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.290 06:43:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.290 06:43:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:50.290 06:43:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:50.290 06:43:43 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:50.290 06:43:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.290 06:43:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.290 06:43:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:50.290 06:43:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:50.290 06:43:43 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:50.290 06:43:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.290 06:43:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.290 06:43:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:50.290 06:43:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:50.290 06:43:43 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:50.290 06:43:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.290 06:43:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.290 06:43:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:50.290 06:43:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:50.290 06:43:43 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:50.290 06:43:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.290 06:43:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.290 06:43:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:50.290 06:43:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:50.290 06:43:43 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:50.290 06:43:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.290 06:43:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.290 06:43:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:50.290 06:43:43 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme2n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:50.290 06:43:43 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:50.290 06:43:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.290 06:43:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.290 06:43:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:50.290 06:43:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:50.290 06:43:43 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:50.290 06:43:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.290 06:43:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.290 06:43:43 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n3 00:09:50.290 06:43:43 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme2 00:09:50.290 06:43:43 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme2_ns 00:09:50.290 06:43:43 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:12.0 00:09:50.290 06:43:43 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme2 00:09:50.290 06:43:43 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:09:50.290 06:43:43 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme3 ]] 00:09:50.290 06:43:43 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:13.0 00:09:50.290 06:43:43 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:13.0 00:09:50.290 06:43:43 nvme_scc -- scripts/common.sh@18 -- # local i 00:09:50.290 06:43:43 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:13.0 ]] 00:09:50.290 06:43:43 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:50.290 06:43:43 nvme_scc -- scripts/common.sh@27 -- # return 0 00:09:50.290 06:43:43 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme3 00:09:50.290 06:43:43 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme3 id-ctrl /dev/nvme3 00:09:50.290 06:43:43 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme3 reg val 00:09:50.290 06:43:43 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:50.290 06:43:43 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme3=()' 00:09:50.290 06:43:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.290 06:43:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.290 06:43:43 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme3 00:09:50.290 06:43:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:50.290 06:43:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.290 06:43:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.290 06:43:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:09:50.290 06:43:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vid]="0x1b36"' 00:09:50.290 06:43:43 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vid]=0x1b36 00:09:50.290 06:43:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.290 06:43:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.290 06:43:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:09:50.290 06:43:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ssvid]="0x1af4"' 00:09:50.290 06:43:43 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ssvid]=0x1af4 00:09:50.290 06:43:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.290 06:43:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.290 06:43:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12343 ]] 00:09:50.290 06:43:43 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme3[sn]="12343 "' 00:09:50.290 06:43:43 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sn]='12343 ' 00:09:50.290 06:43:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.290 06:43:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.290 06:43:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:09:50.290 06:43:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mn]="QEMU NVMe Ctrl "' 00:09:50.290 06:43:43 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mn]='QEMU NVMe Ctrl ' 00:09:50.290 06:43:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.290 06:43:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.290 06:43:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:09:50.290 06:43:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fr]="8.0.0 "' 00:09:50.290 06:43:43 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fr]='8.0.0 ' 00:09:50.290 06:43:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.290 06:43:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.290 06:43:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:09:50.290 06:43:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rab]="6"' 00:09:50.290 06:43:43 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rab]=6 00:09:50.290 06:43:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.290 06:43:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.291 06:43:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:09:50.291 06:43:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ieee]="525400"' 00:09:50.291 06:43:43 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ieee]=525400 00:09:50.291 06:43:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.291 06:43:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.291 06:43:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x2 ]] 00:09:50.291 06:43:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cmic]="0x2"' 00:09:50.291 06:43:43 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cmic]=0x2 00:09:50.291 06:43:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.291 06:43:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.291 06:43:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:50.291 06:43:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mdts]="7"' 00:09:50.291 06:43:43 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mdts]=7 00:09:50.291 06:43:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.291 06:43:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.291 06:43:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.291 06:43:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cntlid]="0"' 00:09:50.291 06:43:43 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cntlid]=0 00:09:50.291 06:43:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.291 06:43:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.291 06:43:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:09:50.291 06:43:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ver]="0x10400"' 00:09:50.291 06:43:43 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ver]=0x10400 00:09:50.291 06:43:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.291 06:43:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.291 06:43:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.291 06:43:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3r]="0"' 00:09:50.291 06:43:43 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rtd3r]=0 
00:09:50.291 06:43:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.291 06:43:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.291 06:43:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.291 06:43:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3e]="0"' 00:09:50.291 06:43:43 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rtd3e]=0 00:09:50.291 06:43:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.291 06:43:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.291 06:43:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:09:50.291 06:43:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oaes]="0x100"' 00:09:50.291 06:43:43 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oaes]=0x100 00:09:50.291 06:43:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.291 06:43:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.291 06:43:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x88010 ]] 00:09:50.291 06:43:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ctratt]="0x88010"' 00:09:50.291 06:43:43 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ctratt]=0x88010 00:09:50.291 06:43:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.291 06:43:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.291 06:43:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.291 06:43:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rrls]="0"' 00:09:50.291 06:43:43 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rrls]=0 00:09:50.291 06:43:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.291 06:43:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.291 06:43:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:50.291 06:43:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cntrltype]="1"' 00:09:50.291 06:43:43 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cntrltype]=1 00:09:50.291 06:43:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.291 06:43:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.291 06:43:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:09:50.291 06:43:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fguid]="00000000-0000-0000-0000-000000000000"' 00:09:50.291 06:43:43 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fguid]=00000000-0000-0000-0000-000000000000 00:09:50.291 06:43:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.291 06:43:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.291 06:43:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.291 06:43:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt1]="0"' 00:09:50.291 06:43:43 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt1]=0 00:09:50.291 06:43:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.291 06:43:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.291 06:43:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.291 06:43:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt2]="0"' 00:09:50.291 06:43:43 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt2]=0 00:09:50.291 06:43:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.291 06:43:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.291 06:43:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.291 06:43:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt3]="0"' 00:09:50.291 06:43:43 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt3]=0 00:09:50.291 06:43:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
00:09:50.291 06:43:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.291 06:43:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.291 06:43:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nvmsr]="0"' 00:09:50.291 06:43:43 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nvmsr]=0 00:09:50.291 06:43:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.291 06:43:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.291 06:43:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.291 06:43:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vwci]="0"' 00:09:50.291 06:43:43 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vwci]=0 00:09:50.291 06:43:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.291 06:43:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.291 06:43:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.291 06:43:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mec]="0"' 00:09:50.291 06:43:43 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mec]=0 00:09:50.291 06:43:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.291 06:43:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.291 06:43:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:09:50.291 06:43:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oacs]="0x12a"' 00:09:50.291 06:43:43 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oacs]=0x12a 00:09:50.291 06:43:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.291 06:43:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.291 06:43:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:50.291 06:43:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[acl]="3"' 00:09:50.291 06:43:43 nvme_scc -- nvme/functions.sh@23 -- # nvme3[acl]=3 00:09:50.291 06:43:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.291 06:43:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.291 06:43:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:50.291 06:43:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[aerl]="3"' 00:09:50.291 06:43:43 nvme_scc -- nvme/functions.sh@23 -- # nvme3[aerl]=3 00:09:50.291 06:43:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.291 06:43:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.291 06:43:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:50.291 06:43:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[frmw]="0x3"' 00:09:50.291 06:43:43 nvme_scc -- nvme/functions.sh@23 -- # nvme3[frmw]=0x3 00:09:50.291 06:43:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.291 06:43:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.291 06:43:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:50.291 06:43:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[lpa]="0x7"' 00:09:50.291 06:43:43 nvme_scc -- nvme/functions.sh@23 -- # nvme3[lpa]=0x7 00:09:50.291 06:43:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.291 06:43:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.291 06:43:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.291 06:43:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[elpe]="0"' 00:09:50.291 06:43:43 nvme_scc -- nvme/functions.sh@23 -- # nvme3[elpe]=0 00:09:50.291 06:43:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.291 06:43:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.291 06:43:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.291 06:43:43 nvme_scc -- nvme/functions.sh@23 -- 
# eval 'nvme3[npss]="0"' 00:09:50.291 06:43:43 nvme_scc -- nvme/functions.sh@23 -- # nvme3[npss]=0 00:09:50.291 06:43:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.291 06:43:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.291 06:43:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.291 06:43:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[avscc]="0"' 00:09:50.291 06:43:43 nvme_scc -- nvme/functions.sh@23 -- # nvme3[avscc]=0 00:09:50.291 06:43:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.291 06:43:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.291 06:43:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.291 06:43:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[apsta]="0"' 00:09:50.291 06:43:43 nvme_scc -- nvme/functions.sh@23 -- # nvme3[apsta]=0 00:09:50.291 06:43:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.291 06:43:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.291 06:43:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:09:50.291 06:43:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[wctemp]="343"' 00:09:50.291 06:43:43 nvme_scc -- nvme/functions.sh@23 -- # nvme3[wctemp]=343 00:09:50.291 06:43:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.291 06:43:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.291 06:43:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:09:50.291 06:43:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cctemp]="373"' 00:09:50.291 06:43:43 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cctemp]=373 00:09:50.291 06:43:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.291 06:43:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.291 06:43:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.291 06:43:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mtfa]="0"' 00:09:50.291 06:43:43 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mtfa]=0 00:09:50.291 06:43:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.292 06:43:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.292 06:43:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.292 06:43:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmpre]="0"' 00:09:50.292 06:43:43 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmpre]=0 00:09:50.292 06:43:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.292 06:43:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.292 06:43:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.292 06:43:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmmin]="0"' 00:09:50.292 06:43:43 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmmin]=0 00:09:50.292 06:43:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.292 06:43:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.292 06:43:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.292 06:43:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[tnvmcap]="0"' 00:09:50.292 06:43:43 nvme_scc -- nvme/functions.sh@23 -- # nvme3[tnvmcap]=0 00:09:50.292 06:43:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.292 06:43:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.292 06:43:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.292 06:43:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[unvmcap]="0"' 00:09:50.292 06:43:43 nvme_scc -- nvme/functions.sh@23 -- # nvme3[unvmcap]=0 00:09:50.292 06:43:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.292 
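Two of the id-ctrl fields just captured decode into human units: MDTS is a power-of-two multiple of the controller's minimum memory page size (CAP.MPSMIN is not in this trace; 4 KiB, the usual QEMU value, is assumed below), and WCTEMP/CCTEMP are absolute temperatures in Kelvin. A hedged sketch using the traced values:

mdts=7; mpsmin_bytes=4096               # page size assumed; CAP.MPSMIN not shown in the trace
echo "max transfer: $(( (1 << mdts) * mpsmin_bytes / 1024 )) KiB"   # 512 KiB
wctemp=343; cctemp=373                  # Kelvin, from the trace above
echo "warning: $(( wctemp - 273 )) C, critical: $(( cctemp - 273 )) C"  # 70 C / 100 C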
06:43:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.292 06:43:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.292 06:43:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rpmbs]="0"' 00:09:50.292 06:43:43 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rpmbs]=0 00:09:50.292 06:43:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.292 06:43:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.292 06:43:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.292 06:43:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[edstt]="0"' 00:09:50.292 06:43:43 nvme_scc -- nvme/functions.sh@23 -- # nvme3[edstt]=0 00:09:50.292 06:43:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.292 06:43:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.292 06:43:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.292 06:43:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[dsto]="0"' 00:09:50.292 06:43:43 nvme_scc -- nvme/functions.sh@23 -- # nvme3[dsto]=0 00:09:50.292 06:43:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.292 06:43:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.292 06:43:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.292 06:43:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fwug]="0"' 00:09:50.292 06:43:43 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fwug]=0 00:09:50.292 06:43:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.292 06:43:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.292 06:43:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.292 06:43:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[kas]="0"' 00:09:50.292 06:43:43 nvme_scc -- nvme/functions.sh@23 -- # nvme3[kas]=0 00:09:50.292 06:43:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.292 06:43:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.292 06:43:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.292 06:43:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hctma]="0"' 00:09:50.292 06:43:43 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hctma]=0 00:09:50.292 06:43:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.292 06:43:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.292 06:43:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.292 06:43:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mntmt]="0"' 00:09:50.292 06:43:43 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mntmt]=0 00:09:50.292 06:43:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.292 06:43:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.292 06:43:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.292 06:43:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mxtmt]="0"' 00:09:50.292 06:43:43 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mxtmt]=0 00:09:50.292 06:43:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.292 06:43:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.292 06:43:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.292 06:43:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sanicap]="0"' 00:09:50.292 06:43:43 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sanicap]=0 00:09:50.292 06:43:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.292 06:43:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.292 06:43:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.292 06:43:43 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme3[hmminds]="0"' 00:09:50.292 06:43:43 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmminds]=0 00:09:50.292 06:43:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.292 06:43:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.292 06:43:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.292 06:43:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmmaxd]="0"' 00:09:50.292 06:43:43 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmmaxd]=0 00:09:50.292 06:43:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.292 06:43:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.292 06:43:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.292 06:43:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nsetidmax]="0"' 00:09:50.292 06:43:43 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nsetidmax]=0 00:09:50.292 06:43:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.292 06:43:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.292 06:43:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:50.292 06:43:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[endgidmax]="1"' 00:09:50.292 06:43:43 nvme_scc -- nvme/functions.sh@23 -- # nvme3[endgidmax]=1 00:09:50.292 06:43:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.292 06:43:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.292 06:43:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.292 06:43:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anatt]="0"' 00:09:50.292 06:43:43 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anatt]=0 00:09:50.292 06:43:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.292 06:43:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.292 06:43:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.292 06:43:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anacap]="0"' 00:09:50.292 06:43:43 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anacap]=0 00:09:50.292 06:43:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.292 06:43:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.292 06:43:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.292 06:43:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anagrpmax]="0"' 00:09:50.292 06:43:43 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anagrpmax]=0 00:09:50.292 06:43:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.292 06:43:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.292 06:43:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.292 06:43:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nanagrpid]="0"' 00:09:50.292 06:43:43 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nanagrpid]=0 00:09:50.292 06:43:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.292 06:43:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.292 06:43:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.292 06:43:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[pels]="0"' 00:09:50.292 06:43:43 nvme_scc -- nvme/functions.sh@23 -- # nvme3[pels]=0 00:09:50.292 06:43:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.292 06:43:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.292 06:43:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.292 06:43:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[domainid]="0"' 00:09:50.292 06:43:43 nvme_scc -- nvme/functions.sh@23 -- # nvme3[domainid]=0 00:09:50.292 06:43:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
00:09:50.292 06:43:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.292 06:43:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.292 06:43:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[megcap]="0"' 00:09:50.292 06:43:43 nvme_scc -- nvme/functions.sh@23 -- # nvme3[megcap]=0 00:09:50.292 06:43:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.292 06:43:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.292 06:43:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:09:50.292 06:43:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sqes]="0x66"' 00:09:50.292 06:43:43 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sqes]=0x66 00:09:50.292 06:43:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.292 06:43:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.292 06:43:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:09:50.292 06:43:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cqes]="0x44"' 00:09:50.292 06:43:43 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cqes]=0x44 00:09:50.292 06:43:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.292 06:43:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.292 06:43:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.292 06:43:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxcmd]="0"' 00:09:50.292 06:43:43 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxcmd]=0 00:09:50.292 06:43:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.292 06:43:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.292 06:43:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:09:50.292 06:43:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nn]="256"' 00:09:50.292 06:43:43 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nn]=256 00:09:50.292 06:43:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.292 06:43:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.292 06:43:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:09:50.292 06:43:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oncs]="0x15d"' 00:09:50.292 06:43:43 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oncs]=0x15d 00:09:50.292 06:43:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.292 06:43:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.292 06:43:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.292 06:43:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fuses]="0"' 00:09:50.292 06:43:43 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fuses]=0 00:09:50.292 06:43:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.292 06:43:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.293 06:43:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.293 06:43:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fna]="0"' 00:09:50.293 06:43:43 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fna]=0 00:09:50.293 06:43:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.293 06:43:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.293 06:43:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:50.293 06:43:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vwc]="0x7"' 00:09:50.293 06:43:43 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vwc]=0x7 00:09:50.293 06:43:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.293 06:43:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.293 06:43:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.293 06:43:43 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme3[awun]="0"' 00:09:50.293 06:43:43 nvme_scc -- nvme/functions.sh@23 -- # nvme3[awun]=0 00:09:50.293 06:43:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.293 06:43:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.293 06:43:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.293 06:43:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[awupf]="0"' 00:09:50.293 06:43:43 nvme_scc -- nvme/functions.sh@23 -- # nvme3[awupf]=0 00:09:50.293 06:43:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.293 06:43:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.293 06:43:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.293 06:43:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[icsvscc]="0"' 00:09:50.293 06:43:43 nvme_scc -- nvme/functions.sh@23 -- # nvme3[icsvscc]=0 00:09:50.293 06:43:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.293 06:43:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.293 06:43:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.293 06:43:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nwpc]="0"' 00:09:50.293 06:43:43 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nwpc]=0 00:09:50.293 06:43:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.293 06:43:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.293 06:43:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.293 06:43:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[acwu]="0"' 00:09:50.293 06:43:43 nvme_scc -- nvme/functions.sh@23 -- # nvme3[acwu]=0 00:09:50.293 06:43:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.293 06:43:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.293 06:43:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:50.293 06:43:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ocfs]="0x3"' 00:09:50.293 06:43:43 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ocfs]=0x3 00:09:50.293 06:43:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.293 06:43:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.293 06:43:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:09:50.293 06:43:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sgls]="0x1"' 00:09:50.293 06:43:43 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sgls]=0x1 00:09:50.293 06:43:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.293 06:43:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.293 06:43:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.293 06:43:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mnan]="0"' 00:09:50.293 06:43:43 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mnan]=0 00:09:50.293 06:43:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.293 06:43:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.293 06:43:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.293 06:43:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxdna]="0"' 00:09:50.293 06:43:43 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxdna]=0 00:09:50.293 06:43:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.293 06:43:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.293 06:43:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.293 06:43:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxcna]="0"' 00:09:50.293 06:43:43 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxcna]=0 00:09:50.293 06:43:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
00:09:50.293 06:43:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.293 06:43:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:fdp-subsys3 ]] 00:09:50.293 06:43:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[subnqn]="nqn.2019-08.org.qemu:fdp-subsys3"' 00:09:50.293 06:43:43 nvme_scc -- nvme/functions.sh@23 -- # nvme3[subnqn]=nqn.2019-08.org.qemu:fdp-subsys3 00:09:50.293 06:43:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.293 06:43:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.293 06:43:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.293 06:43:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ioccsz]="0"' 00:09:50.293 06:43:43 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ioccsz]=0 00:09:50.293 06:43:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.293 06:43:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.293 06:43:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.293 06:43:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[iorcsz]="0"' 00:09:50.293 06:43:43 nvme_scc -- nvme/functions.sh@23 -- # nvme3[iorcsz]=0 00:09:50.293 06:43:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.293 06:43:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.293 06:43:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.293 06:43:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[icdoff]="0"' 00:09:50.293 06:43:43 nvme_scc -- nvme/functions.sh@23 -- # nvme3[icdoff]=0 00:09:50.293 06:43:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.293 06:43:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.293 06:43:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.293 06:43:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fcatt]="0"' 00:09:50.293 06:43:43 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fcatt]=0 00:09:50.293 06:43:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.293 06:43:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.293 06:43:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.293 06:43:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[msdbd]="0"' 00:09:50.293 06:43:43 nvme_scc -- nvme/functions.sh@23 -- # nvme3[msdbd]=0 00:09:50.293 06:43:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.293 06:43:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.293 06:43:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.293 06:43:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ofcs]="0"' 00:09:50.293 06:43:43 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ofcs]=0 00:09:50.293 06:43:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.293 06:43:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.293 06:43:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:09:50.293 06:43:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:09:50.293 06:43:43 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:09:50.293 06:43:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.293 06:43:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.293 06:43:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:09:50.293 06:43:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:09:50.293 06:43:43 nvme_scc -- 
nvme/functions.sh@23 -- # nvme3[rwt]='0 rwl:0 idle_power:- active_power:-' 00:09:50.293 06:43:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.293 06:43:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.293 06:43:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:09:50.293 06:43:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[active_power_workload]="-"' 00:09:50.293 06:43:43 nvme_scc -- nvme/functions.sh@23 -- # nvme3[active_power_workload]=- 00:09:50.293 06:43:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:50.293 06:43:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.293 06:43:43 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme3_ns 00:09:50.293 06:43:43 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme3 00:09:50.293 06:43:43 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme3_ns 00:09:50.293 06:43:43 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:13.0 00:09:50.293 06:43:43 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme3 00:09:50.293 06:43:43 nvme_scc -- nvme/functions.sh@65 -- # (( 4 > 0 )) 00:09:50.293 06:43:43 nvme_scc -- nvme/nvme_scc.sh@17 -- # get_ctrl_with_feature scc 00:09:50.293 06:43:43 nvme_scc -- nvme/functions.sh@204 -- # local _ctrls feature=scc 00:09:50.293 06:43:43 nvme_scc -- nvme/functions.sh@206 -- # _ctrls=($(get_ctrls_with_feature "$feature")) 00:09:50.293 06:43:43 nvme_scc -- nvme/functions.sh@206 -- # get_ctrls_with_feature scc 00:09:50.293 06:43:43 nvme_scc -- nvme/functions.sh@192 -- # (( 4 == 0 )) 00:09:50.293 06:43:43 nvme_scc -- nvme/functions.sh@194 -- # local ctrl feature=scc 00:09:50.293 06:43:43 nvme_scc -- nvme/functions.sh@196 -- # type -t ctrl_has_scc 00:09:50.293 06:43:43 nvme_scc -- nvme/functions.sh@196 -- # [[ function == function ]] 00:09:50.293 06:43:43 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:09:50.293 06:43:43 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme1 00:09:50.293 06:43:43 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme1 oncs 00:09:50.293 06:43:43 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme1 00:09:50.293 06:43:43 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme1 00:09:50.293 06:43:43 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme1 oncs 00:09:50.293 06:43:43 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme1 reg=oncs 00:09:50.293 06:43:43 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme1 ]] 00:09:50.293 06:43:43 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme1 00:09:50.293 06:43:43 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:09:50.293 06:43:43 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:09:50.293 06:43:43 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:09:50.293 06:43:43 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:09:50.293 06:43:43 nvme_scc -- nvme/functions.sh@199 -- # echo nvme1 00:09:50.293 06:43:43 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:09:50.293 06:43:43 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme0 00:09:50.293 06:43:43 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme0 oncs 00:09:50.293 06:43:43 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme0 00:09:50.293 06:43:43 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme0 00:09:50.293 06:43:43 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme0 oncs 00:09:50.293 06:43:43 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme0 reg=oncs 00:09:50.293 
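The selection loop running here is the whole SCC gate: ctrl_has_scc fetches each controller's ONCS word and tests bit 8, the Copy-command bit. With oncs=0x15d (binary 1_0101_1101) that bit is set, so every controller qualifies. The test in isolation:

oncs=0x15d
if (( oncs & (1 << 8) )); then          # ONCS bit 8 = Copy command (SCC) supported
    echo "SCC supported"
fi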
06:43:43 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme0 ]] 00:09:50.294 06:43:43 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme0 00:09:50.294 06:43:43 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:09:50.294 06:43:43 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:09:50.294 06:43:43 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:09:50.294 06:43:43 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:09:50.294 06:43:43 nvme_scc -- nvme/functions.sh@199 -- # echo nvme0 00:09:50.294 06:43:43 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:09:50.294 06:43:43 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme3 00:09:50.294 06:43:43 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme3 oncs 00:09:50.294 06:43:43 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme3 00:09:50.294 06:43:43 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme3 00:09:50.294 06:43:43 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme3 oncs 00:09:50.294 06:43:43 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme3 reg=oncs 00:09:50.294 06:43:43 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme3 ]] 00:09:50.294 06:43:43 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme3 00:09:50.294 06:43:43 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:09:50.294 06:43:43 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:09:50.294 06:43:43 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:09:50.294 06:43:43 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:09:50.294 06:43:43 nvme_scc -- nvme/functions.sh@199 -- # echo nvme3 00:09:50.294 06:43:43 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:09:50.294 06:43:43 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme2 00:09:50.294 06:43:43 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme2 oncs 00:09:50.294 06:43:43 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme2 00:09:50.294 06:43:43 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme2 00:09:50.294 06:43:43 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme2 oncs 00:09:50.294 06:43:43 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme2 reg=oncs 00:09:50.294 06:43:43 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme2 ]] 00:09:50.294 06:43:43 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme2 00:09:50.294 06:43:43 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:09:50.294 06:43:43 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:09:50.294 06:43:43 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:09:50.294 06:43:43 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:09:50.294 06:43:43 nvme_scc -- nvme/functions.sh@199 -- # echo nvme2 00:09:50.294 06:43:43 nvme_scc -- nvme/functions.sh@207 -- # (( 4 > 0 )) 00:09:50.294 06:43:43 nvme_scc -- nvme/functions.sh@208 -- # echo nvme1 00:09:50.294 06:43:43 nvme_scc -- nvme/functions.sh@209 -- # return 0 00:09:50.294 06:43:43 nvme_scc -- nvme/nvme_scc.sh@17 -- # ctrl=nvme1 00:09:50.294 06:43:43 nvme_scc -- nvme/nvme_scc.sh@17 -- # bdf=0000:00:10.0 00:09:50.294 06:43:43 nvme_scc -- nvme/nvme_scc.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:09:50.864 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:51.437 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:09:51.437 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:09:51.437 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:09:51.437 0000:00:13.0 (1b36 
0010): nvme -> uio_pci_generic 00:09:51.437 06:43:44 nvme_scc -- nvme/nvme_scc.sh@21 -- # run_test nvme_simple_copy /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:10.0' 00:09:51.437 06:43:44 nvme_scc -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:09:51.437 06:43:44 nvme_scc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:51.437 06:43:44 nvme_scc -- common/autotest_common.sh@10 -- # set +x 00:09:51.437 ************************************ 00:09:51.437 START TEST nvme_simple_copy 00:09:51.437 ************************************ 00:09:51.437 06:43:44 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:10.0' 00:09:51.698 Initializing NVMe Controllers 00:09:51.698 Attaching to 0000:00:10.0 00:09:51.698 Controller supports SCC. Attached to 0000:00:10.0 00:09:51.698 Namespace ID: 1 size: 6GB 00:09:51.698 Initialization complete. 00:09:51.698 00:09:51.698 Controller QEMU NVMe Ctrl (12340 ) 00:09:51.698 Controller PCI vendor:6966 PCI subsystem vendor:6900 00:09:51.698 Namespace Block Size:4096 00:09:51.698 Writing LBAs 0 to 63 with Random Data 00:09:51.698 Copied LBAs from 0 - 63 to the Destination LBA 256 00:09:51.698 LBAs matching Written Data: 64 00:09:51.698 00:09:51.698 real 0m0.274s 00:09:51.698 user 0m0.113s 00:09:51.698 sys 0m0.058s 00:09:51.698 ************************************ 00:09:51.698 END TEST nvme_simple_copy 00:09:51.698 ************************************ 00:09:51.698 06:43:44 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:51.698 06:43:44 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@10 -- # set +x 00:09:51.698 00:09:51.698 real 0m7.980s 00:09:51.698 user 0m1.149s 00:09:51.698 sys 0m1.516s 00:09:51.698 ************************************ 00:09:51.698 END TEST nvme_scc 00:09:51.698 ************************************ 00:09:51.698 06:43:44 nvme_scc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:51.698 06:43:44 nvme_scc -- common/autotest_common.sh@10 -- # set +x 00:09:51.698 06:43:44 -- spdk/autotest.sh@219 -- # [[ 0 -eq 1 ]] 00:09:51.698 06:43:44 -- spdk/autotest.sh@222 -- # [[ 0 -eq 1 ]] 00:09:51.698 06:43:44 -- spdk/autotest.sh@225 -- # [[ '' -eq 1 ]] 00:09:51.698 06:43:44 -- spdk/autotest.sh@228 -- # [[ 1 -eq 1 ]] 00:09:51.698 06:43:44 -- spdk/autotest.sh@229 -- # run_test nvme_fdp test/nvme/nvme_fdp.sh 00:09:51.698 06:43:44 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:51.698 06:43:44 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:51.698 06:43:44 -- common/autotest_common.sh@10 -- # set +x 00:09:51.961 ************************************ 00:09:51.961 START TEST nvme_fdp 00:09:51.961 ************************************ 00:09:51.961 06:43:44 nvme_fdp -- common/autotest_common.sh@1129 -- # test/nvme/nvme_fdp.sh 00:09:51.961 * Looking for test storage... 
00:09:51.961 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:09:51.961 06:43:44 nvme_fdp -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:51.961 06:43:44 nvme_fdp -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:51.961 06:43:44 nvme_fdp -- common/autotest_common.sh@1693 -- # lcov --version 00:09:51.961 06:43:44 nvme_fdp -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:51.961 06:43:44 nvme_fdp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:51.961 06:43:44 nvme_fdp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:51.961 06:43:44 nvme_fdp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:51.961 06:43:44 nvme_fdp -- scripts/common.sh@336 -- # IFS=.-: 00:09:51.961 06:43:44 nvme_fdp -- scripts/common.sh@336 -- # read -ra ver1 00:09:51.961 06:43:44 nvme_fdp -- scripts/common.sh@337 -- # IFS=.-: 00:09:51.961 06:43:44 nvme_fdp -- scripts/common.sh@337 -- # read -ra ver2 00:09:51.961 06:43:44 nvme_fdp -- scripts/common.sh@338 -- # local 'op=<' 00:09:51.961 06:43:44 nvme_fdp -- scripts/common.sh@340 -- # ver1_l=2 00:09:51.961 06:43:44 nvme_fdp -- scripts/common.sh@341 -- # ver2_l=1 00:09:51.961 06:43:44 nvme_fdp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:51.961 06:43:44 nvme_fdp -- scripts/common.sh@344 -- # case "$op" in 00:09:51.961 06:43:44 nvme_fdp -- scripts/common.sh@345 -- # : 1 00:09:51.961 06:43:44 nvme_fdp -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:51.961 06:43:44 nvme_fdp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:51.961 06:43:44 nvme_fdp -- scripts/common.sh@365 -- # decimal 1 00:09:51.961 06:43:44 nvme_fdp -- scripts/common.sh@353 -- # local d=1 00:09:51.961 06:43:44 nvme_fdp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:51.961 06:43:44 nvme_fdp -- scripts/common.sh@355 -- # echo 1 00:09:51.961 06:43:44 nvme_fdp -- scripts/common.sh@365 -- # ver1[v]=1 00:09:51.961 06:43:44 nvme_fdp -- scripts/common.sh@366 -- # decimal 2 00:09:51.961 06:43:44 nvme_fdp -- scripts/common.sh@353 -- # local d=2 00:09:51.961 06:43:44 nvme_fdp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:51.961 06:43:44 nvme_fdp -- scripts/common.sh@355 -- # echo 2 00:09:51.961 06:43:44 nvme_fdp -- scripts/common.sh@366 -- # ver2[v]=2 00:09:51.961 06:43:44 nvme_fdp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:51.961 06:43:44 nvme_fdp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:51.961 06:43:44 nvme_fdp -- scripts/common.sh@368 -- # return 0 00:09:51.961 06:43:44 nvme_fdp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:51.961 06:43:44 nvme_fdp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:51.961 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:51.961 --rc genhtml_branch_coverage=1 00:09:51.961 --rc genhtml_function_coverage=1 00:09:51.961 --rc genhtml_legend=1 00:09:51.961 --rc geninfo_all_blocks=1 00:09:51.961 --rc geninfo_unexecuted_blocks=1 00:09:51.961 00:09:51.961 ' 00:09:51.961 06:43:44 nvme_fdp -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:51.961 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:51.961 --rc genhtml_branch_coverage=1 00:09:51.961 --rc genhtml_function_coverage=1 00:09:51.961 --rc genhtml_legend=1 00:09:51.961 --rc geninfo_all_blocks=1 00:09:51.961 --rc geninfo_unexecuted_blocks=1 00:09:51.961 00:09:51.961 ' 00:09:51.961 06:43:44 nvme_fdp -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 
00:09:51.961 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:51.961 --rc genhtml_branch_coverage=1 00:09:51.961 --rc genhtml_function_coverage=1 00:09:51.961 --rc genhtml_legend=1 00:09:51.961 --rc geninfo_all_blocks=1 00:09:51.961 --rc geninfo_unexecuted_blocks=1 00:09:51.961 00:09:51.961 ' 00:09:51.961 06:43:44 nvme_fdp -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:51.961 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:51.961 --rc genhtml_branch_coverage=1 00:09:51.961 --rc genhtml_function_coverage=1 00:09:51.961 --rc genhtml_legend=1 00:09:51.961 --rc geninfo_all_blocks=1 00:09:51.961 --rc geninfo_unexecuted_blocks=1 00:09:51.961 00:09:51.961 ' 00:09:51.961 06:43:44 nvme_fdp -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:09:51.961 06:43:44 nvme_fdp -- nvme/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:09:51.961 06:43:44 nvme_fdp -- nvme/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/nvme/../../../ 00:09:51.961 06:43:44 nvme_fdp -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:09:51.961 06:43:44 nvme_fdp -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:51.961 06:43:44 nvme_fdp -- scripts/common.sh@15 -- # shopt -s extglob 00:09:51.961 06:43:44 nvme_fdp -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:51.961 06:43:44 nvme_fdp -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:51.961 06:43:44 nvme_fdp -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:51.961 06:43:44 nvme_fdp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:51.961 06:43:44 nvme_fdp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:51.961 06:43:44 nvme_fdp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:51.961 06:43:44 nvme_fdp -- paths/export.sh@5 -- # export PATH 00:09:51.961 06:43:44 nvme_fdp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
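The lt 1.15 2 probe traced above decides which lcov option spelling the harness exports: cmp_versions splits both version strings on '.', '-' and ':' via IFS=.-: into arrays and compares them field by field. A condensed sketch of that comparison, assuming purely numeric fields (the traced decimal helper guards exactly that):

```bash
# Condensed sketch of the cmp_versions logic from scripts/common.sh as traced
# above; the real helper additionally validates each field with `decimal`.
version_lt() {
    local -a ver1 ver2
    IFS=.-: read -ra ver1 <<< "$1"
    IFS=.-: read -ra ver2 <<< "$2"
    local v len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    for (( v = 0; v < len; v++ )); do
        (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0   # strictly older
        (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1   # strictly newer
    done
    return 1   # equal is not "less than"
}

version_lt 1.15 2 && echo "lcov < 2: use the lcov 1.x --rc option spelling"
```

Here 1 < 2 at the first field, so the comparison returns true and the harness exports the lcov 1.x flavor of the --rc lcov_* options that fills the LCOV_OPTS/LCOV entries above.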
00:09:51.961 06:43:44 nvme_fdp -- nvme/functions.sh@10 -- # ctrls=() 00:09:51.961 06:43:44 nvme_fdp -- nvme/functions.sh@10 -- # declare -A ctrls 00:09:51.961 06:43:44 nvme_fdp -- nvme/functions.sh@11 -- # nvmes=() 00:09:51.961 06:43:44 nvme_fdp -- nvme/functions.sh@11 -- # declare -A nvmes 00:09:51.961 06:43:44 nvme_fdp -- nvme/functions.sh@12 -- # bdfs=() 00:09:51.961 06:43:44 nvme_fdp -- nvme/functions.sh@12 -- # declare -A bdfs 00:09:51.961 06:43:44 nvme_fdp -- nvme/functions.sh@13 -- # ordered_ctrls=() 00:09:51.961 06:43:44 nvme_fdp -- nvme/functions.sh@13 -- # declare -a ordered_ctrls 00:09:51.961 06:43:44 nvme_fdp -- nvme/functions.sh@14 -- # nvme_name= 00:09:51.961 06:43:44 nvme_fdp -- cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:51.961 06:43:44 nvme_fdp -- nvme/nvme_fdp.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:09:52.222 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:52.484 Waiting for block devices as requested 00:09:52.484 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:09:52.484 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:09:52.748 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:09:52.748 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:09:58.147 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:09:58.147 06:43:50 nvme_fdp -- nvme/nvme_fdp.sh@12 -- # scan_nvme_ctrls 00:09:58.147 06:43:50 nvme_fdp -- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci 00:09:58.147 06:43:50 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:09:58.147 06:43:50 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]] 00:09:58.147 06:43:50 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:11.0 00:09:58.147 06:43:50 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:11.0 00:09:58.147 06:43:50 nvme_fdp -- scripts/common.sh@18 -- # local i 00:09:58.147 06:43:50 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:09:58.147 06:43:50 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:58.147 06:43:50 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:09:58.147 06:43:50 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme0 00:09:58.147 06:43:50 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0 00:09:58.147 06:43:50 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme0 reg val 00:09:58.147 06:43:50 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:58.147 06:43:50 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme0=()' 00:09:58.147 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.147 06:43:50 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0 00:09:58.147 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.147 06:43:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:58.147 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.147 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.147 06:43:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:09:58.147 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vid]="0x1b36"' 00:09:58.147 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vid]=0x1b36 00:09:58.147 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.147 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.147 06:43:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 
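From scan_nvme_ctrls onward the log is one long register walk: for every /sys/class/nvme/nvme* device that pci_can_use accepts, nvme_get pipes `nvme id-ctrl` (and later `id-ns`) through an IFS=: read -r reg val loop and evals each pair into a global associative array, which is why the trace repeats eval 'nvme0[vid]="0x1b36"' and friends for dozens of fields. A stripped-down sketch of that pattern, using a nameref where functions.sh uses eval:

```bash
# Sketch of the nvme_get parsing pattern traced here: one global associative
# array per controller, keyed by id-ctrl field name. A nameref stands in for
# the per-line eval that functions.sh performs.
nvme_get() {
    local ref=$1 dev=$2 reg val
    declare -gA "$ref=()"           # e.g. a global nvme0=()
    local -n _ctrl=$ref
    while IFS=: read -r reg val; do
        reg=${reg//[[:space:]]/}    # "vid       " -> "vid"
        [[ -n $reg && -n $val ]] || continue
        _ctrl[$reg]=${val# }        # keep the value, minus one leading space
    done < <(nvme id-ctrl "$dev")
}

nvme_get nvme0 /dev/nvme0
echo "vid=${nvme0[vid]} oncs=${nvme0[oncs]}"   # vid=0x1b36 oncs=0x15d here
```

Once the arrays are populated, helpers like the get_nvme_ctrl_feature calls traced earlier reduce to a nameref plus a lookup (local -n _ctrl=nvme0; echo "${_ctrl[oncs]}"), which is exactly the shape of the ctrl_has_scc trace.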
00:09:58.147 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ssvid]="0x1af4"' 00:09:58.147 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ssvid]=0x1af4 00:09:58.147 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.147 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.147 06:43:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12341 ]] 00:09:58.147 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sn]="12341 "' 00:09:58.147 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sn]='12341 ' 00:09:58.147 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.147 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.147 06:43:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:09:58.147 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mn]="QEMU NVMe Ctrl "' 00:09:58.147 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mn]='QEMU NVMe Ctrl ' 00:09:58.147 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.147 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.147 06:43:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:09:58.147 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fr]="8.0.0 "' 00:09:58.147 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fr]='8.0.0 ' 00:09:58.147 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.147 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.147 06:43:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:09:58.147 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rab]="6"' 00:09:58.147 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rab]=6 00:09:58.147 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.147 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.147 06:43:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:09:58.147 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ieee]="525400"' 00:09:58.147 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ieee]=525400 00:09:58.147 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.147 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.147 06:43:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:58.147 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cmic]="0"' 00:09:58.147 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cmic]=0 00:09:58.147 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.147 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.147 06:43:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:58.147 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mdts]="7"' 00:09:58.147 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mdts]=7 00:09:58.147 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.147 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.147 06:43:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:58.147 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cntlid]="0"' 00:09:58.147 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cntlid]=0 00:09:58.147 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.147 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.147 06:43:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:09:58.147 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ver]="0x10400"' 00:09:58.147 06:43:50 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme0[ver]=0x10400 00:09:58.147 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.147 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.147 06:43:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:58.147 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3r]="0"' 00:09:58.147 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rtd3r]=0 00:09:58.147 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.147 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.147 06:43:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:58.147 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3e]="0"' 00:09:58.147 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rtd3e]=0 00:09:58.147 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.147 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.147 06:43:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:09:58.147 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oaes]="0x100"' 00:09:58.148 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oaes]=0x100 00:09:58.148 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.148 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.148 06:43:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:09:58.148 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ctratt]="0x8000"' 00:09:58.148 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ctratt]=0x8000 00:09:58.148 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.148 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.148 06:43:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:58.148 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rrls]="0"' 00:09:58.148 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rrls]=0 00:09:58.148 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.148 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.148 06:43:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:58.148 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cntrltype]="1"' 00:09:58.148 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cntrltype]=1 00:09:58.148 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.148 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.148 06:43:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:09:58.148 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fguid]="00000000-0000-0000-0000-000000000000"' 00:09:58.148 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fguid]=00000000-0000-0000-0000-000000000000 00:09:58.148 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.148 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.148 06:43:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:58.148 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt1]="0"' 00:09:58.148 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt1]=0 00:09:58.148 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.148 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.148 06:43:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:58.148 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt2]="0"' 00:09:58.148 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt2]=0 00:09:58.148 06:43:50 
nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.148 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.148 06:43:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:58.148 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt3]="0"' 00:09:58.148 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt3]=0 00:09:58.148 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.148 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.148 06:43:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:58.148 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nvmsr]="0"' 00:09:58.148 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nvmsr]=0 00:09:58.148 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.148 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.148 06:43:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:58.148 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vwci]="0"' 00:09:58.148 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vwci]=0 00:09:58.148 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.148 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.148 06:43:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:58.148 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mec]="0"' 00:09:58.148 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mec]=0 00:09:58.148 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.148 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.148 06:43:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:09:58.148 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oacs]="0x12a"' 00:09:58.148 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oacs]=0x12a 00:09:58.148 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.148 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.148 06:43:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:58.148 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[acl]="3"' 00:09:58.148 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[acl]=3 00:09:58.148 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.148 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.148 06:43:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:58.148 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[aerl]="3"' 00:09:58.148 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[aerl]=3 00:09:58.148 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.148 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.148 06:43:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:58.148 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[frmw]="0x3"' 00:09:58.148 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[frmw]=0x3 00:09:58.148 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.148 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.148 06:43:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:58.148 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[lpa]="0x7"' 00:09:58.148 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[lpa]=0x7 00:09:58.148 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.148 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.148 06:43:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
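The walk has just stored nvme0[oacs]=0x12a, the Optional Admin Command Support mask. Decoding it shows what this QEMU controller will accept; bit 5 (Directives) is the one worth noticing in an FDP test, since FDP-aware writes tag their placement through the directives mechanism. A small hypothetical decoder, not part of functions.sh, with bit names taken from the NVMe base specification:

```bash
# Hypothetical OACS decoder; bit positions per the NVMe base specification.
decode_oacs() {
    local oacs=$1 i
    local names=(
        "Security Send/Receive"     "Format NVM"
        "Firmware Download/Commit"  "Namespace Management"
        "Device Self-test"          "Directives"
        "NVMe-MI Send/Receive"      "Virtualization Management"
        "Doorbell Buffer Config"    "Get LBA Status"
    )
    for i in "${!names[@]}"; do
        (( oacs & 1 << i )) && printf 'bit %d: %s\n' "$i" "${names[i]}"
    done
}

# 0x12a = bits 1, 3, 5, 8:
# Format NVM, Namespace Management, Directives, Doorbell Buffer Config
decode_oacs 0x12a
```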
00:09:58.148 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[elpe]="0"' 00:09:58.148 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[elpe]=0 00:09:58.148 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.148 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.148 06:43:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:58.148 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[npss]="0"' 00:09:58.148 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[npss]=0 00:09:58.148 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.148 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.148 06:43:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:58.148 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[avscc]="0"' 00:09:58.148 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[avscc]=0 00:09:58.148 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.148 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.148 06:43:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:58.148 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[apsta]="0"' 00:09:58.148 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[apsta]=0 00:09:58.148 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.148 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.148 06:43:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:09:58.148 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[wctemp]="343"' 00:09:58.148 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[wctemp]=343 00:09:58.148 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.148 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.148 06:43:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:09:58.148 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cctemp]="373"' 00:09:58.148 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cctemp]=373 00:09:58.148 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.148 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.148 06:43:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:58.148 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mtfa]="0"' 00:09:58.148 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mtfa]=0 00:09:58.148 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.148 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.148 06:43:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:58.148 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmpre]="0"' 00:09:58.148 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmpre]=0 00:09:58.148 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.148 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.148 06:43:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:58.148 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmmin]="0"' 00:09:58.148 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmmin]=0 00:09:58.148 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.148 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.148 06:43:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:58.148 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[tnvmcap]="0"' 00:09:58.148 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[tnvmcap]=0 00:09:58.148 06:43:50 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:09:58.148 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.148 06:43:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:58.148 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[unvmcap]="0"' 00:09:58.148 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[unvmcap]=0 00:09:58.148 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.148 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.148 06:43:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:58.148 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rpmbs]="0"' 00:09:58.148 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rpmbs]=0 00:09:58.148 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.148 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.148 06:43:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:58.148 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[edstt]="0"' 00:09:58.148 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[edstt]=0 00:09:58.148 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.148 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.148 06:43:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:58.148 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[dsto]="0"' 00:09:58.148 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[dsto]=0 00:09:58.148 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.148 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.148 06:43:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:58.148 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fwug]="0"' 00:09:58.148 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fwug]=0 00:09:58.148 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.148 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.149 06:43:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:58.149 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[kas]="0"' 00:09:58.149 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[kas]=0 00:09:58.149 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.149 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.149 06:43:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:58.149 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hctma]="0"' 00:09:58.149 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hctma]=0 00:09:58.149 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.149 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.149 06:43:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:58.149 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mntmt]="0"' 00:09:58.149 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mntmt]=0 00:09:58.149 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.149 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.149 06:43:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:58.149 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mxtmt]="0"' 00:09:58.149 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mxtmt]=0 00:09:58.149 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.149 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.149 06:43:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:58.149 06:43:50 nvme_fdp 
-- nvme/functions.sh@23 -- # eval 'nvme0[sanicap]="0"' 00:09:58.149 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sanicap]=0 00:09:58.149 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.149 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.149 06:43:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:58.149 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmminds]="0"' 00:09:58.149 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmminds]=0 00:09:58.149 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.149 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.149 06:43:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:58.149 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmmaxd]="0"' 00:09:58.149 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmmaxd]=0 00:09:58.149 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.149 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.149 06:43:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:58.149 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nsetidmax]="0"' 00:09:58.149 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nsetidmax]=0 00:09:58.149 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.149 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.149 06:43:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:58.149 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[endgidmax]="0"' 00:09:58.149 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[endgidmax]=0 00:09:58.149 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.149 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.149 06:43:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:58.149 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anatt]="0"' 00:09:58.149 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anatt]=0 00:09:58.149 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.149 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.149 06:43:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:58.149 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anacap]="0"' 00:09:58.149 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anacap]=0 00:09:58.149 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.149 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.149 06:43:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:58.149 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anagrpmax]="0"' 00:09:58.149 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anagrpmax]=0 00:09:58.149 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.149 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.149 06:43:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:58.149 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nanagrpid]="0"' 00:09:58.149 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nanagrpid]=0 00:09:58.149 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.149 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.149 06:43:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:58.149 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[pels]="0"' 00:09:58.149 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[pels]=0 00:09:58.149 06:43:50 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:09:58.149 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.149 06:43:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:58.149 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[domainid]="0"' 00:09:58.149 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[domainid]=0 00:09:58.149 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.149 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.149 06:43:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:58.149 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[megcap]="0"' 00:09:58.149 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[megcap]=0 00:09:58.149 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.149 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.149 06:43:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:09:58.149 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sqes]="0x66"' 00:09:58.149 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sqes]=0x66 00:09:58.149 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.149 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.149 06:43:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:09:58.149 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cqes]="0x44"' 00:09:58.149 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cqes]=0x44 00:09:58.149 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.149 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.149 06:43:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:58.149 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxcmd]="0"' 00:09:58.149 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxcmd]=0 00:09:58.149 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.149 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.149 06:43:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:09:58.149 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nn]="256"' 00:09:58.149 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nn]=256 00:09:58.149 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.149 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.149 06:43:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:09:58.149 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oncs]="0x15d"' 00:09:58.149 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oncs]=0x15d 00:09:58.149 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.149 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.149 06:43:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:58.149 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fuses]="0"' 00:09:58.149 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fuses]=0 00:09:58.149 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.149 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.149 06:43:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:58.149 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fna]="0"' 00:09:58.149 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fna]=0 00:09:58.149 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.149 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.149 06:43:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 
0x7 ]] 00:09:58.149 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vwc]="0x7"' 00:09:58.149 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vwc]=0x7 00:09:58.149 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.149 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.149 06:43:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:58.149 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[awun]="0"' 00:09:58.149 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[awun]=0 00:09:58.149 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.149 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.149 06:43:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:58.149 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[awupf]="0"' 00:09:58.149 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[awupf]=0 00:09:58.149 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.149 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.149 06:43:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:58.149 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[icsvscc]="0"' 00:09:58.149 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[icsvscc]=0 00:09:58.149 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.149 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.149 06:43:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:58.149 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nwpc]="0"' 00:09:58.149 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nwpc]=0 00:09:58.149 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.149 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.149 06:43:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:58.149 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[acwu]="0"' 00:09:58.149 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[acwu]=0 00:09:58.149 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.149 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.149 06:43:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:58.149 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ocfs]="0x3"' 00:09:58.149 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ocfs]=0x3 00:09:58.149 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.149 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.149 06:43:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:09:58.149 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sgls]="0x1"' 00:09:58.149 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sgls]=0x1 00:09:58.149 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.149 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.150 06:43:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:58.150 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mnan]="0"' 00:09:58.150 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mnan]=0 00:09:58.150 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.150 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.150 06:43:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:58.150 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxdna]="0"' 00:09:58.150 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxdna]=0 00:09:58.150 06:43:50 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:09:58.150 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.150 06:43:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:58.150 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxcna]="0"' 00:09:58.150 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxcna]=0 00:09:58.150 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.150 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.150 06:43:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12341 ]] 00:09:58.150 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[subnqn]="nqn.2019-08.org.qemu:12341"' 00:09:58.150 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[subnqn]=nqn.2019-08.org.qemu:12341 00:09:58.150 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.150 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.150 06:43:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:58.150 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ioccsz]="0"' 00:09:58.150 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ioccsz]=0 00:09:58.150 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.150 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.150 06:43:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:58.150 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[iorcsz]="0"' 00:09:58.150 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[iorcsz]=0 00:09:58.150 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.150 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.150 06:43:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:58.150 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[icdoff]="0"' 00:09:58.150 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[icdoff]=0 00:09:58.150 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.150 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.150 06:43:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:58.150 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fcatt]="0"' 00:09:58.150 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fcatt]=0 00:09:58.150 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.150 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.150 06:43:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:58.150 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[msdbd]="0"' 00:09:58.150 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[msdbd]=0 00:09:58.150 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.150 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.150 06:43:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:58.150 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ofcs]="0"' 00:09:58.150 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ofcs]=0 00:09:58.150 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.150 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.150 06:43:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:09:58.150 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:09:58.150 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:09:58.150 
06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.150 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.150 06:43:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:09:58.150 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:09:58.150 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rwt]='0 rwl:0 idle_power:- active_power:-' 00:09:58.150 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.150 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.150 06:43:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:09:58.150 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[active_power_workload]="-"' 00:09:58.150 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[active_power_workload]=- 00:09:58.150 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.150 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.150 06:43:50 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme0_ns 00:09:58.150 06:43:50 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:09:58.150 06:43:50 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/nvme0n1 ]] 00:09:58.150 06:43:50 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme0n1 00:09:58.150 06:43:50 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme0n1 id-ns /dev/nvme0n1 00:09:58.150 06:43:50 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme0n1 reg val 00:09:58.150 06:43:50 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:58.150 06:43:50 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme0n1=()' 00:09:58.150 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.150 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.150 06:43:50 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme0n1 00:09:58.150 06:43:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:58.150 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.150 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.150 06:43:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:58.150 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsze]="0x140000"' 00:09:58.150 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nsze]=0x140000 00:09:58.150 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.150 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.150 06:43:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:58.150 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[ncap]="0x140000"' 00:09:58.150 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[ncap]=0x140000 00:09:58.150 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.150 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.150 06:43:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:58.150 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nuse]="0x140000"' 00:09:58.150 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nuse]=0x140000 00:09:58.150 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.150 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.150 06:43:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:58.150 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsfeat]="0x14"' 00:09:58.150 06:43:50 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme0n1[nsfeat]=0x14 00:09:58.150 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.150 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.150 06:43:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:58.150 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nlbaf]="7"' 00:09:58.150 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nlbaf]=7 00:09:58.150 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.150 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.150 06:43:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:58.150 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[flbas]="0x4"' 00:09:58.150 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[flbas]=0x4 00:09:58.150 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.150 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.150 06:43:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:58.150 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[mc]="0x3"' 00:09:58.150 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mc]=0x3 00:09:58.150 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.150 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.150 06:43:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:58.150 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[dpc]="0x1f"' 00:09:58.150 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dpc]=0x1f 00:09:58.150 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.150 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.150 06:43:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:58.150 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[dps]="0"' 00:09:58.150 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dps]=0 00:09:58.150 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.150 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.150 06:43:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:58.150 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nmic]="0"' 00:09:58.150 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nmic]=0 00:09:58.150 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.150 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.150 06:43:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:58.150 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[rescap]="0"' 00:09:58.150 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[rescap]=0 00:09:58.150 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.150 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.150 06:43:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:58.150 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[fpi]="0"' 00:09:58.150 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[fpi]=0 00:09:58.150 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.150 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.150 06:43:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:58.150 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[dlfeat]="1"' 00:09:58.150 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dlfeat]=1 00:09:58.150 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.150 06:43:50 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:09:58.150 06:43:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:58.150 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawun]="0"' 00:09:58.150 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nawun]=0 00:09:58.150 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.150 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.150 06:43:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:58.151 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawupf]="0"' 00:09:58.151 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nawupf]=0 00:09:58.151 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.151 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.151 06:43:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:58.151 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nacwu]="0"' 00:09:58.151 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nacwu]=0 00:09:58.151 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.151 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.151 06:43:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:58.151 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabsn]="0"' 00:09:58.151 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabsn]=0 00:09:58.151 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.151 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.151 06:43:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:58.151 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabo]="0"' 00:09:58.151 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabo]=0 00:09:58.151 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.151 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.151 06:43:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:58.151 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabspf]="0"' 00:09:58.151 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabspf]=0 00:09:58.151 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.151 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.151 06:43:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:58.151 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[noiob]="0"' 00:09:58.151 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[noiob]=0 00:09:58.151 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.151 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.151 06:43:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:58.151 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmcap]="0"' 00:09:58.151 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nvmcap]=0 00:09:58.151 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.151 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.151 06:43:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:58.151 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwg]="0"' 00:09:58.151 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npwg]=0 00:09:58.151 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.151 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.151 06:43:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:58.151 06:43:50 nvme_fdp -- nvme/functions.sh@23 
-- # eval 'nvme0n1[npwa]="0"' 00:09:58.151 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npwa]=0 00:09:58.151 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.151 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.151 06:43:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:58.151 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npdg]="0"' 00:09:58.151 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npdg]=0 00:09:58.151 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.151 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.151 06:43:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:58.151 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npda]="0"' 00:09:58.151 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npda]=0 00:09:58.151 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.151 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.151 06:43:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:58.151 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nows]="0"' 00:09:58.151 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nows]=0 00:09:58.151 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.151 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.151 06:43:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:58.151 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[mssrl]="128"' 00:09:58.151 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mssrl]=128 00:09:58.151 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.151 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.151 06:43:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:58.151 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[mcl]="128"' 00:09:58.151 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mcl]=128 00:09:58.151 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.151 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.151 06:43:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:58.151 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[msrc]="127"' 00:09:58.151 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[msrc]=127 00:09:58.151 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.151 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.151 06:43:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:58.151 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nulbaf]="0"' 00:09:58.151 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nulbaf]=0 00:09:58.151 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.151 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.151 06:43:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:58.151 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[anagrpid]="0"' 00:09:58.151 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[anagrpid]=0 00:09:58.151 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.151 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.151 06:43:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:58.151 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsattr]="0"' 00:09:58.151 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nsattr]=0 00:09:58.151 06:43:50 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:09:58.151 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.151 06:43:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:58.151 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmsetid]="0"' 00:09:58.151 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nvmsetid]=0 00:09:58.151 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.151 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.151 06:43:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:58.151 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[endgid]="0"' 00:09:58.151 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[endgid]=0 00:09:58.151 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.151 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.151 06:43:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:58.151 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nguid]="00000000000000000000000000000000"' 00:09:58.151 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nguid]=00000000000000000000000000000000 00:09:58.151 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.151 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.151 06:43:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:58.151 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[eui64]="0000000000000000"' 00:09:58.151 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[eui64]=0000000000000000 00:09:58.151 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.151 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.151 06:43:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:58.151 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:58.151 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:58.151 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.151 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.151 06:43:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:58.151 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:58.151 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:58.151 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.151 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.151 06:43:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:58.151 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:58.151 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:58.151 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.151 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.151 06:43:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:58.151 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:58.151 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:58.151 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.151 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.151 06:43:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 
lbads:12 rp:0 (in use) ]] 00:09:58.151 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:58.151 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:58.151 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.151 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.151 06:43:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:58.151 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:58.151 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:58.151 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.151 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.151 06:43:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:58.151 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:58.151 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:58.151 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.151 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.151 06:43:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:58.151 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:58.152 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:58.152 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.152 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.152 06:43:50 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme0n1 00:09:58.152 06:43:50 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme0 00:09:58.152 06:43:50 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme0_ns 00:09:58.152 06:43:50 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:11.0 00:09:58.152 06:43:50 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme0 00:09:58.152 06:43:50 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:09:58.152 06:43:50 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme1 ]] 00:09:58.152 06:43:50 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:10.0 00:09:58.152 06:43:50 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:10.0 00:09:58.152 06:43:50 nvme_fdp -- scripts/common.sh@18 -- # local i 00:09:58.152 06:43:50 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:09:58.152 06:43:50 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:58.152 06:43:50 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:09:58.152 06:43:50 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme1 00:09:58.152 06:43:50 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme1 id-ctrl /dev/nvme1 00:09:58.152 06:43:50 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme1 reg val 00:09:58.152 06:43:50 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:58.152 06:43:50 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme1=()' 00:09:58.152 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.152 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.152 06:43:50 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme1 00:09:58.152 06:43:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:58.152 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # 
IFS=: 00:09:58.152 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.152 06:43:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:09:58.152 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vid]="0x1b36"' 00:09:58.152 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vid]=0x1b36 00:09:58.152 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.152 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.152 06:43:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:09:58.152 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ssvid]="0x1af4"' 00:09:58.152 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ssvid]=0x1af4 00:09:58.152 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.152 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.152 06:43:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12340 ]] 00:09:58.152 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sn]="12340 "' 00:09:58.152 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sn]='12340 ' 00:09:58.152 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.152 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.152 06:43:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:09:58.152 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mn]="QEMU NVMe Ctrl "' 00:09:58.152 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mn]='QEMU NVMe Ctrl ' 00:09:58.152 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.152 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.152 06:43:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:09:58.152 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fr]="8.0.0 "' 00:09:58.152 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fr]='8.0.0 ' 00:09:58.152 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.152 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.152 06:43:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:09:58.152 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rab]="6"' 00:09:58.152 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rab]=6 00:09:58.152 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.152 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.152 06:43:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:09:58.152 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ieee]="525400"' 00:09:58.152 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ieee]=525400 00:09:58.152 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.152 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.152 06:43:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:58.152 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cmic]="0"' 00:09:58.152 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cmic]=0 00:09:58.152 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.152 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.152 06:43:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:58.152 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mdts]="7"' 00:09:58.152 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mdts]=7 00:09:58.152 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.152 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.152 06:43:50 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:58.152 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cntlid]="0"' 00:09:58.152 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cntlid]=0 00:09:58.152 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.152 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.152 06:43:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:09:58.152 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ver]="0x10400"' 00:09:58.152 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ver]=0x10400 00:09:58.152 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.152 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.152 06:43:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:58.152 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3r]="0"' 00:09:58.152 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rtd3r]=0 00:09:58.152 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.152 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.152 06:43:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:58.152 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3e]="0"' 00:09:58.152 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rtd3e]=0 00:09:58.152 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.152 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.152 06:43:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:09:58.152 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oaes]="0x100"' 00:09:58.152 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oaes]=0x100 00:09:58.152 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.152 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.152 06:43:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:09:58.152 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ctratt]="0x8000"' 00:09:58.152 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ctratt]=0x8000 00:09:58.152 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.152 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.152 06:43:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:58.152 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rrls]="0"' 00:09:58.152 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rrls]=0 00:09:58.152 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.152 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.152 06:43:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:58.152 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cntrltype]="1"' 00:09:58.152 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cntrltype]=1 00:09:58.152 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.152 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.152 06:43:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:09:58.152 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fguid]="00000000-0000-0000-0000-000000000000"' 00:09:58.152 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fguid]=00000000-0000-0000-0000-000000000000 00:09:58.152 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.152 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.152 06:43:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:58.152 
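The IFS=:/read/eval pattern traced here is nvme/functions.sh's nvme_get populating a global associative array (nvme1) from nvme-cli output. A minimal paraphrase of that loop, assuming the usual "field : value" id-ctrl layout (the script's exact whitespace-trimming rules are not visible in the trace):

    nvme_get() {
        local ref=$1 reg val
        shift
        local -gA "$ref=()"                  # e.g. declare -gA nvme1=()
        while IFS=: read -r reg val; do      # split each line on the first ':'
            [[ -n $val ]] || continue        # skip blank/header lines
            reg=${reg//[[:space:]]/}         # 'vid   ' -> 'vid'
            eval "${ref}[\$reg]=\${val# }"   # nvme1[vid]='0x1b36'
        done < <(/usr/local/src/nvme-cli/nvme "$@")
    }
    nvme_get nvme1 id-ctrl /dev/nvme1
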
06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt1]="0"' 00:09:58.152 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt1]=0 00:09:58.152 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.152 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.152 06:43:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:58.152 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt2]="0"' 00:09:58.153 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt2]=0 00:09:58.153 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.153 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.153 06:43:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:58.153 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt3]="0"' 00:09:58.153 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt3]=0 00:09:58.153 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.153 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.153 06:43:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:58.153 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nvmsr]="0"' 00:09:58.153 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nvmsr]=0 00:09:58.153 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.153 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.153 06:43:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:58.153 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vwci]="0"' 00:09:58.153 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vwci]=0 00:09:58.153 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.153 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.153 06:43:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:58.153 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mec]="0"' 00:09:58.153 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mec]=0 00:09:58.153 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.153 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.153 06:43:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:09:58.153 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oacs]="0x12a"' 00:09:58.153 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oacs]=0x12a 00:09:58.153 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.153 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.153 06:43:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:58.153 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[acl]="3"' 00:09:58.153 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[acl]=3 00:09:58.153 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.153 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.153 06:43:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:58.153 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[aerl]="3"' 00:09:58.153 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[aerl]=3 00:09:58.153 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.153 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.153 06:43:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:58.153 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[frmw]="0x3"' 00:09:58.153 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[frmw]=0x3 00:09:58.153 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- 
# IFS=: 00:09:58.153 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.153 06:43:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:58.153 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[lpa]="0x7"' 00:09:58.153 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[lpa]=0x7 00:09:58.153 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.153 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.153 06:43:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:58.153 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[elpe]="0"' 00:09:58.153 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[elpe]=0 00:09:58.153 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.153 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.153 06:43:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:58.153 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[npss]="0"' 00:09:58.153 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[npss]=0 00:09:58.153 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.153 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.153 06:43:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:58.153 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[avscc]="0"' 00:09:58.153 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[avscc]=0 00:09:58.153 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.153 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.153 06:43:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:58.153 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[apsta]="0"' 00:09:58.153 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[apsta]=0 00:09:58.153 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.153 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.153 06:43:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:09:58.153 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[wctemp]="343"' 00:09:58.153 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[wctemp]=343 00:09:58.153 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.153 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.153 06:43:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:09:58.153 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cctemp]="373"' 00:09:58.153 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cctemp]=373 00:09:58.153 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.153 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.153 06:43:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:58.153 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mtfa]="0"' 00:09:58.153 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mtfa]=0 00:09:58.153 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.153 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.153 06:43:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:58.153 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmpre]="0"' 00:09:58.153 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmpre]=0 00:09:58.153 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.153 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.153 06:43:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:58.153 06:43:50 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme1[hmmin]="0"' 00:09:58.153 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmmin]=0 00:09:58.153 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.153 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.153 06:43:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:58.153 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[tnvmcap]="0"' 00:09:58.153 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[tnvmcap]=0 00:09:58.153 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.153 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.153 06:43:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:58.153 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[unvmcap]="0"' 00:09:58.153 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[unvmcap]=0 00:09:58.153 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.153 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.153 06:43:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:58.153 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rpmbs]="0"' 00:09:58.153 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rpmbs]=0 00:09:58.153 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.153 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.153 06:43:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:58.153 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[edstt]="0"' 00:09:58.153 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[edstt]=0 00:09:58.153 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.153 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.153 06:43:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:58.153 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[dsto]="0"' 00:09:58.153 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[dsto]=0 00:09:58.153 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.153 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.153 06:43:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:58.153 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fwug]="0"' 00:09:58.153 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fwug]=0 00:09:58.153 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.153 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.153 06:43:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:58.153 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[kas]="0"' 00:09:58.153 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[kas]=0 00:09:58.153 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.153 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.153 06:43:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:58.153 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hctma]="0"' 00:09:58.153 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hctma]=0 00:09:58.153 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.153 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.153 06:43:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:58.153 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mntmt]="0"' 00:09:58.153 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mntmt]=0 00:09:58.153 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.153 
06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.153 06:43:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:58.153 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mxtmt]="0"' 00:09:58.153 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mxtmt]=0 00:09:58.153 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.153 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.153 06:43:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:58.153 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sanicap]="0"' 00:09:58.153 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sanicap]=0 00:09:58.153 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.153 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.153 06:43:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:58.153 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmminds]="0"' 00:09:58.153 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmminds]=0 00:09:58.153 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.153 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.153 06:43:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:58.153 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmmaxd]="0"' 00:09:58.153 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmmaxd]=0 00:09:58.154 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.154 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.154 06:43:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:58.154 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nsetidmax]="0"' 00:09:58.154 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nsetidmax]=0 00:09:58.154 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.154 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.154 06:43:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:58.154 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[endgidmax]="0"' 00:09:58.154 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[endgidmax]=0 00:09:58.154 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.154 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.154 06:43:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:58.154 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anatt]="0"' 00:09:58.154 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anatt]=0 00:09:58.154 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.154 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.154 06:43:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:58.154 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anacap]="0"' 00:09:58.154 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anacap]=0 00:09:58.154 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.154 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.154 06:43:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:58.154 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anagrpmax]="0"' 00:09:58.154 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anagrpmax]=0 00:09:58.154 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.154 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.154 06:43:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:58.154 06:43:50 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme1[nanagrpid]="0"' 00:09:58.154 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nanagrpid]=0 00:09:58.154 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.154 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.154 06:43:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:58.154 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[pels]="0"' 00:09:58.154 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[pels]=0 00:09:58.154 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.154 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.154 06:43:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:58.154 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[domainid]="0"' 00:09:58.154 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[domainid]=0 00:09:58.154 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.154 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.154 06:43:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:58.154 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[megcap]="0"' 00:09:58.154 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[megcap]=0 00:09:58.154 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.154 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.154 06:43:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:09:58.154 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sqes]="0x66"' 00:09:58.154 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sqes]=0x66 00:09:58.154 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.154 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.154 06:43:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:09:58.154 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cqes]="0x44"' 00:09:58.154 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cqes]=0x44 00:09:58.154 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.154 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.154 06:43:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:58.154 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxcmd]="0"' 00:09:58.154 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxcmd]=0 00:09:58.154 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.154 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.154 06:43:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:09:58.154 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nn]="256"' 00:09:58.154 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nn]=256 00:09:58.154 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.154 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.154 06:43:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:09:58.154 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oncs]="0x15d"' 00:09:58.154 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oncs]=0x15d 00:09:58.154 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.154 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.154 06:43:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:58.154 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fuses]="0"' 00:09:58.154 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fuses]=0 00:09:58.154 06:43:50 nvme_fdp -- 
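The nvme1[oncs]=0x15d mask parsed just above encodes the controller's optional NVM command support. Decoded per the NVMe base spec's ONCS bit layout (illustrative only, not part of the test script):

    # 0x15d = bits 0, 2, 3, 4, 6, 8 set
    declare -A oncs_bits=(
        [0]="Compare" [1]="Write Uncorrectable" [2]="Dataset Management"
        [3]="Write Zeroes" [4]="Save/Select in Set Features" [5]="Reservations"
        [6]="Timestamp" [7]="Verify" [8]="Copy"
    )
    for bit in 0 1 2 3 4 5 6 7 8; do
        (( 0x15d & 1 << bit )) && echo "oncs bit $bit: ${oncs_bits[$bit]}"
    done
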
nvme/functions.sh@21 -- # IFS=: 00:09:58.154 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.154 06:43:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:58.154 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fna]="0"' 00:09:58.154 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fna]=0 00:09:58.154 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.154 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.154 06:43:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:58.154 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vwc]="0x7"' 00:09:58.154 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vwc]=0x7 00:09:58.154 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.154 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.154 06:43:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:58.154 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[awun]="0"' 00:09:58.154 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[awun]=0 00:09:58.154 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.154 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.154 06:43:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:58.154 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[awupf]="0"' 00:09:58.154 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[awupf]=0 00:09:58.154 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.154 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.154 06:43:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:58.154 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[icsvscc]="0"' 00:09:58.154 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[icsvscc]=0 00:09:58.154 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.154 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.154 06:43:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:58.154 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nwpc]="0"' 00:09:58.154 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nwpc]=0 00:09:58.154 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.154 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.154 06:43:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:58.154 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[acwu]="0"' 00:09:58.154 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[acwu]=0 00:09:58.154 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.154 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.154 06:43:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:58.154 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ocfs]="0x3"' 00:09:58.154 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ocfs]=0x3 00:09:58.154 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.154 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.154 06:43:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:09:58.154 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sgls]="0x1"' 00:09:58.154 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sgls]=0x1 00:09:58.154 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.154 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.154 06:43:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:58.154 06:43:50 
nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mnan]="0"' 00:09:58.154 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mnan]=0 00:09:58.154 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.154 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.154 06:43:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:58.154 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxdna]="0"' 00:09:58.154 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxdna]=0 00:09:58.154 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.154 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.154 06:43:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:58.154 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxcna]="0"' 00:09:58.154 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxcna]=0 00:09:58.154 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.154 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.154 06:43:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12340 ]] 00:09:58.154 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[subnqn]="nqn.2019-08.org.qemu:12340"' 00:09:58.154 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[subnqn]=nqn.2019-08.org.qemu:12340 00:09:58.154 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.154 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.154 06:43:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:58.154 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ioccsz]="0"' 00:09:58.154 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ioccsz]=0 00:09:58.154 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.154 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.154 06:43:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:58.154 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[iorcsz]="0"' 00:09:58.154 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[iorcsz]=0 00:09:58.154 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.154 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.154 06:43:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:58.154 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[icdoff]="0"' 00:09:58.154 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[icdoff]=0 00:09:58.155 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.155 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.155 06:43:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:58.155 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fcatt]="0"' 00:09:58.155 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fcatt]=0 00:09:58.155 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.155 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.155 06:43:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:58.155 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[msdbd]="0"' 00:09:58.155 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[msdbd]=0 00:09:58.155 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.155 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.155 06:43:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:58.155 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ofcs]="0"' 00:09:58.155 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # 
nvme1[ofcs]=0 00:09:58.155 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.155 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.155 06:43:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:09:58.155 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:09:58.155 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:09:58.155 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.155 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.155 06:43:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:09:58.155 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:09:58.155 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rwt]='0 rwl:0 idle_power:- active_power:-' 00:09:58.155 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.155 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.155 06:43:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:09:58.155 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[active_power_workload]="-"' 00:09:58.155 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[active_power_workload]=- 00:09:58.155 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.155 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.155 06:43:50 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme1_ns 00:09:58.155 06:43:50 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:09:58.155 06:43:50 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n1 ]] 00:09:58.155 06:43:50 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme1n1 00:09:58.155 06:43:50 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme1n1 id-ns /dev/nvme1n1 00:09:58.155 06:43:50 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme1n1 reg val 00:09:58.155 06:43:50 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:58.155 06:43:50 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme1n1=()' 00:09:58.155 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.155 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.155 06:43:50 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n1 00:09:58.155 06:43:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:58.155 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.155 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.155 06:43:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:09:58.155 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsze]="0x17a17a"' 00:09:58.155 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nsze]=0x17a17a 00:09:58.155 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.155 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.155 06:43:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:09:58.155 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[ncap]="0x17a17a"' 00:09:58.155 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[ncap]=0x17a17a 00:09:58.155 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.155 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.155 06:43:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 
0x17a17a ]] 00:09:58.155 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nuse]="0x17a17a"' 00:09:58.155 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nuse]=0x17a17a 00:09:58.155 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.155 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.155 06:43:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:58.155 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsfeat]="0x14"' 00:09:58.155 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nsfeat]=0x14 00:09:58.155 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.155 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.155 06:43:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:58.155 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nlbaf]="7"' 00:09:58.155 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nlbaf]=7 00:09:58.155 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.155 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.155 06:43:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:58.155 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[flbas]="0x7"' 00:09:58.155 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[flbas]=0x7 00:09:58.155 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.155 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.155 06:43:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:58.155 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[mc]="0x3"' 00:09:58.155 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mc]=0x3 00:09:58.155 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.155 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.155 06:43:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:58.155 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[dpc]="0x1f"' 00:09:58.155 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dpc]=0x1f 00:09:58.155 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.155 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.155 06:43:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:58.155 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[dps]="0"' 00:09:58.155 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dps]=0 00:09:58.155 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.155 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.155 06:43:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:58.155 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nmic]="0"' 00:09:58.155 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nmic]=0 00:09:58.155 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.155 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.155 06:43:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:58.155 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[rescap]="0"' 00:09:58.155 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[rescap]=0 00:09:58.155 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.155 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.155 06:43:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:58.155 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[fpi]="0"' 00:09:58.155 06:43:50 nvme_fdp -- 
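nvme1n1 reports nsze=ncap=nuse=0x17a17a above. As a quick sanity check (not from the log itself), with the namespace's in-use 4096-byte format (lbads:12, reported further down in this dump) that size works out to:

    echo $(( 0x17a17a ))          # 1548666 blocks
    echo $(( 0x17a17a * 4096 ))   # 6343335936 bytes, consistent with a ~6 GiB QEMU test image
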
nvme/functions.sh@23 -- # nvme1n1[fpi]=0 00:09:58.155 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.155 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.155 06:43:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:58.155 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[dlfeat]="1"' 00:09:58.155 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dlfeat]=1 00:09:58.155 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.155 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.155 06:43:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:58.155 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawun]="0"' 00:09:58.155 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nawun]=0 00:09:58.155 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.155 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.155 06:43:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:58.155 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawupf]="0"' 00:09:58.155 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nawupf]=0 00:09:58.155 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.155 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.155 06:43:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:58.155 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nacwu]="0"' 00:09:58.155 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nacwu]=0 00:09:58.155 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.155 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.155 06:43:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:58.155 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabsn]="0"' 00:09:58.155 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabsn]=0 00:09:58.155 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.155 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.155 06:43:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:58.155 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabo]="0"' 00:09:58.155 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabo]=0 00:09:58.155 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.155 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.155 06:43:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:58.155 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabspf]="0"' 00:09:58.155 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabspf]=0 00:09:58.155 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.155 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.155 06:43:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:58.155 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[noiob]="0"' 00:09:58.155 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[noiob]=0 00:09:58.155 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.155 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.155 06:43:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:58.155 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmcap]="0"' 00:09:58.155 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nvmcap]=0 00:09:58.155 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.155 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- 
# read -r reg val 00:09:58.156 06:43:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:58.156 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwg]="0"' 00:09:58.156 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npwg]=0 00:09:58.156 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.156 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.156 06:43:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:58.156 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwa]="0"' 00:09:58.156 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npwa]=0 00:09:58.156 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.156 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.156 06:43:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:58.156 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npdg]="0"' 00:09:58.156 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npdg]=0 00:09:58.156 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.156 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.156 06:43:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:58.156 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npda]="0"' 00:09:58.156 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npda]=0 00:09:58.156 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.156 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.156 06:43:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:58.156 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nows]="0"' 00:09:58.156 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nows]=0 00:09:58.156 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.156 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.156 06:43:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:58.156 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[mssrl]="128"' 00:09:58.156 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mssrl]=128 00:09:58.156 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.156 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.156 06:43:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:58.156 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[mcl]="128"' 00:09:58.156 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mcl]=128 00:09:58.156 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.156 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.156 06:43:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:58.156 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[msrc]="127"' 00:09:58.156 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[msrc]=127 00:09:58.156 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.156 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.156 06:43:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:58.156 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nulbaf]="0"' 00:09:58.156 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nulbaf]=0 00:09:58.156 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.156 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.156 06:43:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:58.156 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 
'nvme1n1[anagrpid]="0"' 00:09:58.156 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[anagrpid]=0 00:09:58.156 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.156 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.156 06:43:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:58.156 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsattr]="0"' 00:09:58.156 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nsattr]=0 00:09:58.156 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.156 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.156 06:43:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:58.156 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmsetid]="0"' 00:09:58.156 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nvmsetid]=0 00:09:58.156 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.156 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.156 06:43:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:58.156 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[endgid]="0"' 00:09:58.156 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[endgid]=0 00:09:58.156 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.156 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.156 06:43:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:58.156 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nguid]="00000000000000000000000000000000"' 00:09:58.156 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nguid]=00000000000000000000000000000000 00:09:58.156 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.156 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.156 06:43:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:58.156 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[eui64]="0000000000000000"' 00:09:58.156 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[eui64]=0000000000000000 00:09:58.156 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.156 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.156 06:43:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:58.156 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:58.156 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:58.156 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.156 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.156 06:43:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:58.156 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:58.156 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:58.156 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.156 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.156 06:43:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:58.156 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:58.156 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:58.156 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.156 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r 
reg val 00:09:58.156 06:43:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:58.156 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:58.156 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:58.156 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.156 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.156 06:43:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:09:58.156 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf4]="ms:0 lbads:12 rp:0 "' 00:09:58.156 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:09:58.156 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.156 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.156 06:43:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:58.156 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:58.156 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:58.156 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.156 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.156 06:43:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:58.156 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:58.156 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:58.156 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.156 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.156 06:43:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:09:58.156 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:09:58.156 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:09:58.156 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.156 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.156 06:43:50 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n1 00:09:58.156 06:43:50 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme1 00:09:58.156 06:43:50 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme1_ns 00:09:58.156 06:43:50 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:10.0 00:09:58.156 06:43:50 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme1 00:09:58.156 06:43:50 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:09:58.156 06:43:50 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme2 ]] 00:09:58.156 06:43:50 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:12.0 00:09:58.156 06:43:50 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:12.0 00:09:58.156 06:43:50 nvme_fdp -- scripts/common.sh@18 -- # local i 00:09:58.156 06:43:50 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:12.0 ]] 00:09:58.156 06:43:50 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:58.156 06:43:50 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:09:58.156 06:43:50 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme2 00:09:58.156 06:43:50 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme2 id-ctrl /dev/nvme2 00:09:58.156 06:43:50 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2 reg val 00:09:58.156 
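At the top of this chunk the scan finished nvme1 (ctrls/nvmes/bdfs/ordered_ctrls all keyed by ctrl_dev) and moved on to nvme2 at 0000:00:12.0. Condensed, the enumeration being traced is shaped like the sketch below; the trace only shows the resulting $pci value, not how the script derives it, so reading the sysfs address attribute here is an assumption:

    declare -A ctrls nvmes bdfs
    declare -a ordered_ctrls
    for ctrl in /sys/class/nvme/nvme*; do
        [[ -e $ctrl ]] || continue
        pci=$(< "$ctrl/address")              # assumption: e.g. 0000:00:12.0
        pci_can_use "$pci" || continue        # scripts/common.sh allow/deny check
        ctrl_dev=${ctrl##*/}                  # nvme0, nvme1, nvme2, ...
        nvme_get "$ctrl_dev" id-ctrl "/dev/$ctrl_dev"
        ctrls[$ctrl_dev]=$ctrl_dev            # e.g. ctrls[nvme1]=nvme1
        nvmes[$ctrl_dev]=${ctrl_dev}_ns       # name of that ctrl's namespace map
        bdfs[$ctrl_dev]=$pci
        ordered_ctrls[${ctrl_dev/nvme/}]=$ctrl_dev
    done
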
06:43:50 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:58.156 06:43:50 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2=()' 00:09:58.156 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.156 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.157 06:43:50 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme2 00:09:58.157 06:43:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:58.157 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.157 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.157 06:43:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:09:58.157 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vid]="0x1b36"' 00:09:58.157 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vid]=0x1b36 00:09:58.157 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.157 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.157 06:43:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:09:58.157 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ssvid]="0x1af4"' 00:09:58.157 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ssvid]=0x1af4 00:09:58.157 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.157 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.157 06:43:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12342 ]] 00:09:58.157 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sn]="12342 "' 00:09:58.157 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sn]='12342 ' 00:09:58.157 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.157 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.157 06:43:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:09:58.157 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mn]="QEMU NVMe Ctrl "' 00:09:58.157 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mn]='QEMU NVMe Ctrl ' 00:09:58.157 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.157 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.157 06:43:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:09:58.157 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fr]="8.0.0 "' 00:09:58.157 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fr]='8.0.0 ' 00:09:58.157 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.157 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.157 06:43:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:09:58.157 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rab]="6"' 00:09:58.157 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rab]=6 00:09:58.157 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.157 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.157 06:43:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:09:58.157 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ieee]="525400"' 00:09:58.157 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ieee]=525400 00:09:58.157 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.157 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.157 06:43:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:58.157 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cmic]="0"' 00:09:58.157 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cmic]=0 00:09:58.157 06:43:50 nvme_fdp 
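nvme1n1, completed above, reports flbas=0x7 with lbaf7 flagged "(in use)" (ms:64 lbads:12). FLBAS bits 3:0 select the active lbafN entry, and lbads is log2 of the data block size; a small helper over the arrays this script builds (the helper itself is illustrative, not from functions.sh):

    active_lbaf() {
        local -n _ns=$1                       # nameref to e.g. nvme1n1
        local idx=$(( ${_ns[flbas]} & 0xf ))  # 0x7 -> format index 7
        local lbaf=${_ns[lbaf$idx]}           # 'ms:64 lbads:12 rp:0 (in use)'
        local lbads=${lbaf#*lbads:}
        lbads=${lbads%% *}
        echo "lbaf$idx in use: $(( 1 << lbads ))-byte data blocks"
    }
    active_lbaf nvme1n1                       # -> lbaf7 in use: 4096-byte data blocks
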
-- nvme/functions.sh@21 -- # IFS=: 00:09:58.157 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.157 06:43:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:58.157 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mdts]="7"' 00:09:58.157 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mdts]=7 00:09:58.157 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.157 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.157 06:43:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:58.157 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cntlid]="0"' 00:09:58.157 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cntlid]=0 00:09:58.157 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.157 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.157 06:43:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:09:58.157 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ver]="0x10400"' 00:09:58.157 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ver]=0x10400 00:09:58.157 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.157 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.157 06:43:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:58.157 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3r]="0"' 00:09:58.157 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rtd3r]=0 00:09:58.157 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.157 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.157 06:43:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:58.157 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3e]="0"' 00:09:58.157 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rtd3e]=0 00:09:58.157 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.157 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.157 06:43:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:09:58.157 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[oaes]="0x100"' 00:09:58.157 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oaes]=0x100 00:09:58.157 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.157 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.157 06:43:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:09:58.157 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ctratt]="0x8000"' 00:09:58.157 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ctratt]=0x8000 00:09:58.157 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.157 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.157 06:43:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:58.157 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rrls]="0"' 00:09:58.157 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rrls]=0 00:09:58.157 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.157 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.157 06:43:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:58.157 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cntrltype]="1"' 00:09:58.157 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cntrltype]=1 00:09:58.157 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.157 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.157 06:43:50 nvme_fdp -- 
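[Editor's note] Two values captured here are encoded fields rather than plain numbers: `ver=0x10400` is the NVMe version register (major in bits 31:16, minor in bits 15:8, so NVMe 1.4), and `mdts=7` is a power-of-two multiplier on the controller's minimum page size. A worked decode; the 4 KiB minimum page (CAP.MPSMIN=0) is an assumption typical for QEMU:

    # Decode ver/mdts as captured above (4 KiB MPSMIN is assumed).
    ver=$((0x10400))
    printf 'NVMe %d.%d\n' $((ver >> 16)) $(((ver >> 8) & 0xff))    # NVMe 1.4
    mdts=7; mpsmin_bytes=4096
    echo "max transfer: $((mpsmin_bytes * (1 << mdts))) bytes"     # 524288 (512 KiB)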
nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:09:58.157 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fguid]="00000000-0000-0000-0000-000000000000"' 00:09:58.157 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fguid]=00000000-0000-0000-0000-000000000000 00:09:58.157 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.157 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.157 06:43:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:58.157 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt1]="0"' 00:09:58.157 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt1]=0 00:09:58.157 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.157 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.157 06:43:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:58.157 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt2]="0"' 00:09:58.157 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt2]=0 00:09:58.157 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.157 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.157 06:43:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:58.157 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt3]="0"' 00:09:58.157 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt3]=0 00:09:58.157 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.157 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.157 06:43:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:58.157 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nvmsr]="0"' 00:09:58.157 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nvmsr]=0 00:09:58.157 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.157 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.157 06:43:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:58.157 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vwci]="0"' 00:09:58.157 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vwci]=0 00:09:58.157 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.157 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.157 06:43:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:58.157 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mec]="0"' 00:09:58.157 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mec]=0 00:09:58.157 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.157 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.157 06:43:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:09:58.157 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[oacs]="0x12a"' 00:09:58.157 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oacs]=0x12a 00:09:58.157 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.157 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.157 06:43:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:58.157 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[acl]="3"' 00:09:58.157 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[acl]=3 00:09:58.157 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.157 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.157 06:43:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:58.157 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 
'nvme2[aerl]="3"' 00:09:58.157 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[aerl]=3 00:09:58.157 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.157 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.157 06:43:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:58.157 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[frmw]="0x3"' 00:09:58.157 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[frmw]=0x3 00:09:58.157 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.157 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.157 06:43:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:58.157 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[lpa]="0x7"' 00:09:58.157 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[lpa]=0x7 00:09:58.158 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.158 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.158 06:43:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:58.158 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[elpe]="0"' 00:09:58.158 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[elpe]=0 00:09:58.158 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.158 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.158 06:43:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:58.158 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[npss]="0"' 00:09:58.158 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[npss]=0 00:09:58.158 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.158 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.158 06:43:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:58.158 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[avscc]="0"' 00:09:58.158 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[avscc]=0 00:09:58.158 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.158 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.158 06:43:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:58.158 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[apsta]="0"' 00:09:58.158 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[apsta]=0 00:09:58.158 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.158 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.158 06:43:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:09:58.158 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[wctemp]="343"' 00:09:58.158 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[wctemp]=343 00:09:58.158 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.158 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.158 06:43:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:09:58.158 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cctemp]="373"' 00:09:58.158 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cctemp]=373 00:09:58.158 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.158 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.158 06:43:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:58.158 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mtfa]="0"' 00:09:58.158 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mtfa]=0 00:09:58.158 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.158 06:43:50 nvme_fdp 
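[Editor's note] `wctemp=343` and `cctemp=373` are the warning and critical composite-temperature thresholds; NVMe reports temperatures in Kelvin, so these correspond to 70 °C and 100 °C:

    # Kelvin-to-Celsius check for the thresholds captured above.
    for k in 343 373; do echo "$k K = $((k - 273)) C"; done   # 70 C, 100 C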
-- nvme/functions.sh@21 -- # read -r reg val 00:09:58.158 06:43:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:58.158 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmpre]="0"' 00:09:58.158 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmpre]=0 00:09:58.158 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.158 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.158 06:43:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:58.158 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmmin]="0"' 00:09:58.158 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmmin]=0 00:09:58.158 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.158 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.158 06:43:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:58.158 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[tnvmcap]="0"' 00:09:58.158 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[tnvmcap]=0 00:09:58.158 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.158 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.158 06:43:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:58.158 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[unvmcap]="0"' 00:09:58.158 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[unvmcap]=0 00:09:58.158 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.158 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.158 06:43:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:58.158 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rpmbs]="0"' 00:09:58.158 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rpmbs]=0 00:09:58.158 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.158 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.158 06:43:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:58.158 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[edstt]="0"' 00:09:58.158 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[edstt]=0 00:09:58.158 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.158 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.158 06:43:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:58.158 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[dsto]="0"' 00:09:58.158 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[dsto]=0 00:09:58.158 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.158 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.158 06:43:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:58.158 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fwug]="0"' 00:09:58.158 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fwug]=0 00:09:58.158 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.158 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.158 06:43:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:58.158 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[kas]="0"' 00:09:58.158 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[kas]=0 00:09:58.158 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.158 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.158 06:43:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:58.158 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hctma]="0"' 
00:09:58.158 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hctma]=0 00:09:58.158 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.158 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.158 06:43:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:58.158 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mntmt]="0"' 00:09:58.158 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mntmt]=0 00:09:58.158 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.158 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.158 06:43:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:58.158 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mxtmt]="0"' 00:09:58.158 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mxtmt]=0 00:09:58.158 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.158 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.158 06:43:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:58.158 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sanicap]="0"' 00:09:58.158 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sanicap]=0 00:09:58.158 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.158 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.158 06:43:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:58.158 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmminds]="0"' 00:09:58.158 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmminds]=0 00:09:58.158 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.158 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.158 06:43:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:58.158 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmmaxd]="0"' 00:09:58.158 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmmaxd]=0 00:09:58.158 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.158 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.158 06:43:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:58.158 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nsetidmax]="0"' 00:09:58.158 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nsetidmax]=0 00:09:58.158 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.158 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.158 06:43:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:58.158 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[endgidmax]="0"' 00:09:58.158 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[endgidmax]=0 00:09:58.158 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.158 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.158 06:43:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:58.158 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anatt]="0"' 00:09:58.158 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anatt]=0 00:09:58.158 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.158 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.158 06:43:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:58.158 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anacap]="0"' 00:09:58.158 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anacap]=0 00:09:58.158 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.158 06:43:50 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:09:58.158 06:43:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:58.158 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anagrpmax]="0"' 00:09:58.158 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anagrpmax]=0 00:09:58.158 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.158 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.158 06:43:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:58.158 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nanagrpid]="0"' 00:09:58.158 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nanagrpid]=0 00:09:58.158 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.158 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.158 06:43:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:58.158 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[pels]="0"' 00:09:58.158 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[pels]=0 00:09:58.158 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.158 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.158 06:43:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:58.158 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[domainid]="0"' 00:09:58.158 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[domainid]=0 00:09:58.158 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.158 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.158 06:43:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:58.158 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[megcap]="0"' 00:09:58.158 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[megcap]=0 00:09:58.158 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.159 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.159 06:43:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:09:58.159 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sqes]="0x66"' 00:09:58.159 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sqes]=0x66 00:09:58.159 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.159 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.159 06:43:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:09:58.159 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cqes]="0x44"' 00:09:58.159 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cqes]=0x44 00:09:58.159 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.159 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.159 06:43:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:58.159 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxcmd]="0"' 00:09:58.159 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxcmd]=0 00:09:58.159 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.159 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.159 06:43:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:09:58.159 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nn]="256"' 00:09:58.159 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nn]=256 00:09:58.159 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.159 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.159 06:43:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:09:58.159 06:43:50 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme2[oncs]="0x15d"' 00:09:58.159 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oncs]=0x15d 00:09:58.159 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.159 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.159 06:43:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:58.159 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fuses]="0"' 00:09:58.159 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fuses]=0 00:09:58.159 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.159 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.159 06:43:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:58.159 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fna]="0"' 00:09:58.159 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fna]=0 00:09:58.159 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.159 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.159 06:43:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:58.159 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vwc]="0x7"' 00:09:58.159 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vwc]=0x7 00:09:58.159 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.159 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.159 06:43:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:58.159 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[awun]="0"' 00:09:58.159 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[awun]=0 00:09:58.159 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.159 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.159 06:43:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:58.159 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[awupf]="0"' 00:09:58.159 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[awupf]=0 00:09:58.159 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.159 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.159 06:43:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:58.159 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[icsvscc]="0"' 00:09:58.159 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[icsvscc]=0 00:09:58.159 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.159 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.159 06:43:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:58.159 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nwpc]="0"' 00:09:58.159 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nwpc]=0 00:09:58.159 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.159 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.159 06:43:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:58.159 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[acwu]="0"' 00:09:58.159 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[acwu]=0 00:09:58.159 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.159 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.159 06:43:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:58.159 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ocfs]="0x3"' 00:09:58.159 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ocfs]=0x3 00:09:58.159 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 
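[Editor's note] Several of the registers in this stretch are packed bitfields. `sqes=0x66` and `cqes=0x44` carry the maximum and required queue-entry sizes as log2 values in the high and low nibbles (64-byte SQEs, 16-byte CQEs), and `oncs=0x15d` advertises optional NVM commands one bit apiece; per the NVMe base spec bit assignments that corresponds to Compare, Dataset Management, Write Zeroes, the Save field in Set/Get Features, Timestamp, and Copy on this QEMU controller. A small decode sketch:

    # Bitfield decode for sqes/cqes/oncs as captured above.
    sqes=$((0x66)); cqes=$((0x44)); oncs=$((0x15d))
    echo "SQE max $((1 << (sqes >> 4)))B, required $((1 << (sqes & 0xf)))B"  # 64B / 64B
    echo "CQE max $((1 << (cqes >> 4)))B, required $((1 << (cqes & 0xf)))B"  # 16B / 16B
    names=(compare write_unc dsm write_zeroes save resv timestamp verify copy)
    for i in "${!names[@]}"; do
        (( oncs & (1 << i) )) && echo "oncs: ${names[i]}"
    done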
00:09:58.159 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.159 06:43:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:09:58.159 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sgls]="0x1"' 00:09:58.159 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sgls]=0x1 00:09:58.159 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.159 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.159 06:43:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:58.159 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mnan]="0"' 00:09:58.159 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mnan]=0 00:09:58.159 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.159 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.159 06:43:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:58.159 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxdna]="0"' 00:09:58.159 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxdna]=0 00:09:58.159 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.159 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.159 06:43:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:58.159 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxcna]="0"' 00:09:58.159 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxcna]=0 00:09:58.159 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.159 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.159 06:43:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12342 ]] 00:09:58.159 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[subnqn]="nqn.2019-08.org.qemu:12342"' 00:09:58.159 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[subnqn]=nqn.2019-08.org.qemu:12342 00:09:58.159 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.159 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.159 06:43:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:58.159 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ioccsz]="0"' 00:09:58.159 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ioccsz]=0 00:09:58.159 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.159 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.159 06:43:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:58.159 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[iorcsz]="0"' 00:09:58.159 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[iorcsz]=0 00:09:58.159 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.159 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.159 06:43:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:58.159 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[icdoff]="0"' 00:09:58.159 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[icdoff]=0 00:09:58.159 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.159 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.159 06:43:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:58.159 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fcatt]="0"' 00:09:58.159 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fcatt]=0 00:09:58.159 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.159 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.159 06:43:50 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:58.159 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[msdbd]="0"' 00:09:58.159 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[msdbd]=0 00:09:58.159 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.159 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.159 06:43:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:58.159 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ofcs]="0"' 00:09:58.159 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ofcs]=0 00:09:58.159 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.159 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.159 06:43:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:09:58.159 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:09:58.159 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:09:58.159 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.159 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.159 06:43:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:09:58.159 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:09:58.159 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rwt]='0 rwl:0 idle_power:- active_power:-' 00:09:58.159 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.159 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.159 06:43:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:09:58.159 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[active_power_workload]="-"' 00:09:58.159 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[active_power_workload]=- 00:09:58.159 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.159 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.159 06:43:50 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme2_ns 00:09:58.159 06:43:50 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:09:58.159 06:43:50 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n1 ]] 00:09:58.159 06:43:50 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n1 00:09:58.159 06:43:50 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n1 id-ns /dev/nvme2n1 00:09:58.159 06:43:50 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n1 reg val 00:09:58.159 06:43:50 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:58.159 06:43:50 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n1=()' 00:09:58.159 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.159 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.159 06:43:50 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n1 00:09:58.160 06:43:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:58.160 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.160 06:43:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.160 06:43:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:58.160 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsze]="0x100000"' 00:09:58.160 06:43:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nsze]=0x100000 00:09:58.160 06:43:50 nvme_fdp -- 
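[Editor's note] With the controller array filled (ending at the power-state fields ps0/rwt), the trace switches to namespaces: `local -n _ctrl_ns=nvme2_ns` binds a nameref to a per-controller table, and the `for ns in "$ctrl/${ctrl##*/}n"*` loop repeats the same nvme_get parse with id-ns for every nvme2n* node. A sketch of that loop; in functions.sh this runs inside a function (hence `local -n` in the trace), so `declare -n` is substituted here to keep the fragment runnable top-level:

    # Sketch of the per-namespace loop seen in the trace.
    declare -A nvme2_ns=()
    declare -n _ctrl_ns=nvme2_ns
    ctrl=/sys/class/nvme/nvme2
    for ns in "$ctrl/${ctrl##*/}n"*; do   # nvme2n1, nvme2n2, ...
        [[ -e $ns ]] || continue
        ns_dev=${ns##*/}                   # e.g. nvme2n1
        nvme_get "$ns_dev" id-ns "/dev/$ns_dev"
        _ctrl_ns[${ns_dev##*n}]=$ns_dev    # index namespaces by number
    done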
nvme/functions.sh@21 -- # IFS=: 00:09:58.160 06:43:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.160 06:43:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:58.160 06:43:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[ncap]="0x100000"' 00:09:58.160 06:43:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[ncap]=0x100000 00:09:58.160 06:43:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.160 06:43:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.160 06:43:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:58.160 06:43:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nuse]="0x100000"' 00:09:58.160 06:43:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nuse]=0x100000 00:09:58.160 06:43:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.160 06:43:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.160 06:43:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:58.160 06:43:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsfeat]="0x14"' 00:09:58.160 06:43:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nsfeat]=0x14 00:09:58.160 06:43:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.160 06:43:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.160 06:43:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:58.160 06:43:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nlbaf]="7"' 00:09:58.160 06:43:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nlbaf]=7 00:09:58.160 06:43:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.160 06:43:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.160 06:43:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:58.160 06:43:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[flbas]="0x4"' 00:09:58.160 06:43:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[flbas]=0x4 00:09:58.160 06:43:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.160 06:43:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.160 06:43:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:58.160 06:43:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[mc]="0x3"' 00:09:58.160 06:43:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[mc]=0x3 00:09:58.160 06:43:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.160 06:43:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.160 06:43:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:58.160 06:43:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[dpc]="0x1f"' 00:09:58.160 06:43:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[dpc]=0x1f 00:09:58.160 06:43:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.160 06:43:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.160 06:43:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:58.160 06:43:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[dps]="0"' 00:09:58.160 06:43:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[dps]=0 00:09:58.160 06:43:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.160 06:43:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.160 06:43:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:58.160 06:43:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nmic]="0"' 00:09:58.160 06:43:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nmic]=0 00:09:58.160 06:43:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.160 06:43:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
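[Editor's note] For nvme2n1, `nsze`, `ncap`, and `nuse` are all 0x100000 blocks, and `flbas=0x4` selects LBA format 4, which the lbaf4 entry later in the trace shows as lbads:12, i.e. 4096-byte blocks. That makes this namespace 4 GiB:

    # Namespace size check: 0x100000 blocks at 2^12 bytes each.
    nsze=$((0x100000)); lbads=12
    echo "$(( nsze * (1 << lbads) )) bytes"   # 4294967296 (4 GiB)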
00:09:58.160 06:43:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:58.160 06:43:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[rescap]="0"' 00:09:58.160 06:43:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[rescap]=0 00:09:58.160 06:43:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.160 06:43:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.160 06:43:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:58.160 06:43:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[fpi]="0"' 00:09:58.160 06:43:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[fpi]=0 00:09:58.160 06:43:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.160 06:43:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.160 06:43:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:58.160 06:43:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[dlfeat]="1"' 00:09:58.160 06:43:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[dlfeat]=1 00:09:58.160 06:43:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.160 06:43:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.160 06:43:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:58.160 06:43:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawun]="0"' 00:09:58.160 06:43:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nawun]=0 00:09:58.160 06:43:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.160 06:43:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.160 06:43:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:58.160 06:43:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawupf]="0"' 00:09:58.160 06:43:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nawupf]=0 00:09:58.160 06:43:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.160 06:43:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.160 06:43:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:58.160 06:43:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nacwu]="0"' 00:09:58.160 06:43:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nacwu]=0 00:09:58.160 06:43:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.160 06:43:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.160 06:43:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:58.160 06:43:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabsn]="0"' 00:09:58.160 06:43:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nabsn]=0 00:09:58.160 06:43:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.160 06:43:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.160 06:43:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:58.160 06:43:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabo]="0"' 00:09:58.160 06:43:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nabo]=0 00:09:58.160 06:43:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.160 06:43:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.160 06:43:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:58.160 06:43:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabspf]="0"' 00:09:58.160 06:43:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nabspf]=0 00:09:58.160 06:43:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.160 06:43:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.160 06:43:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:58.160 06:43:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[noiob]="0"' 
00:09:58.160 06:43:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[noiob]=0 00:09:58.160 06:43:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.160 06:43:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.160 06:43:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:58.160 06:43:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmcap]="0"' 00:09:58.160 06:43:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nvmcap]=0 00:09:58.160 06:43:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.160 06:43:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.160 06:43:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:58.160 06:43:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwg]="0"' 00:09:58.160 06:43:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npwg]=0 00:09:58.160 06:43:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.160 06:43:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.160 06:43:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:58.160 06:43:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwa]="0"' 00:09:58.160 06:43:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npwa]=0 00:09:58.160 06:43:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.160 06:43:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.160 06:43:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:58.160 06:43:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npdg]="0"' 00:09:58.160 06:43:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npdg]=0 00:09:58.160 06:43:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.160 06:43:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.160 06:43:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:58.160 06:43:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npda]="0"' 00:09:58.160 06:43:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npda]=0 00:09:58.160 06:43:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.160 06:43:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.160 06:43:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:58.160 06:43:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nows]="0"' 00:09:58.160 06:43:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nows]=0 00:09:58.160 06:43:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.160 06:43:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.160 06:43:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:58.160 06:43:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[mssrl]="128"' 00:09:58.160 06:43:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[mssrl]=128 00:09:58.160 06:43:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.160 06:43:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.160 06:43:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:58.160 06:43:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[mcl]="128"' 00:09:58.160 06:43:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[mcl]=128 00:09:58.160 06:43:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.160 06:43:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.160 06:43:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:58.160 06:43:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[msrc]="127"' 00:09:58.160 06:43:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[msrc]=127 00:09:58.160 06:43:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.160 06:43:51 
nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.160 06:43:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:58.160 06:43:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nulbaf]="0"' 00:09:58.160 06:43:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nulbaf]=0 00:09:58.160 06:43:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.160 06:43:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.160 06:43:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:58.161 06:43:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[anagrpid]="0"' 00:09:58.161 06:43:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[anagrpid]=0 00:09:58.161 06:43:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.161 06:43:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.161 06:43:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:58.161 06:43:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsattr]="0"' 00:09:58.161 06:43:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nsattr]=0 00:09:58.161 06:43:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.161 06:43:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.161 06:43:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:58.161 06:43:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmsetid]="0"' 00:09:58.161 06:43:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nvmsetid]=0 00:09:58.161 06:43:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.161 06:43:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.161 06:43:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:58.161 06:43:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[endgid]="0"' 00:09:58.161 06:43:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[endgid]=0 00:09:58.161 06:43:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.161 06:43:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.161 06:43:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:58.161 06:43:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nguid]="00000000000000000000000000000000"' 00:09:58.161 06:43:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nguid]=00000000000000000000000000000000 00:09:58.161 06:43:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.161 06:43:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.161 06:43:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:58.161 06:43:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[eui64]="0000000000000000"' 00:09:58.161 06:43:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[eui64]=0000000000000000 00:09:58.161 06:43:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.161 06:43:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.161 06:43:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:58.161 06:43:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:58.161 06:43:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:58.161 06:43:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.161 06:43:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.161 06:43:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:58.161 06:43:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:58.161 06:43:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf1]='ms:8 lbads:9 rp:0 
' 00:09:58.161 06:43:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.161 06:43:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.161 06:43:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:58.161 06:43:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:58.161 06:43:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:58.161 06:43:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.161 06:43:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.161 06:43:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:58.161 06:43:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:58.161 06:43:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:58.161 06:43:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.161 06:43:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.161 06:43:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:58.161 06:43:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:58.161 06:43:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:58.161 06:43:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.161 06:43:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.161 06:43:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:58.161 06:43:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:58.161 06:43:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:58.161 06:43:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.161 06:43:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.161 06:43:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:58.161 06:43:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:58.161 06:43:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:58.161 06:43:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.161 06:43:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.161 06:43:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:58.161 06:43:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:58.161 06:43:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:58.161 06:43:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.161 06:43:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.161 06:43:51 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n1 00:09:58.161 06:43:51 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:09:58.161 06:43:51 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n2 ]] 00:09:58.161 06:43:51 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n2 00:09:58.161 06:43:51 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n2 id-ns /dev/nvme2n2 00:09:58.161 06:43:51 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n2 reg val 00:09:58.161 06:43:51 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:58.161 06:43:51 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n2=()' 00:09:58.161 06:43:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.161 06:43:51 nvme_fdp -- 
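[Editor's note] Each `lbafN` entry parsed above is a packed LBA-format descriptor: `ms` is the metadata bytes per block, `lbads` the log2 of the data size, `rp` a relative-performance hint, and the `(in use)` marker flags the format currently selected via flbas (format 4 here: 4096-byte blocks, no metadata). A one-liner sketch for pulling the active block size back out of the array that nvme_get built:

    # Extract the in-use LBA data size from the parsed table (bash 4+).
    for key in "${!nvme2n1[@]}"; do
        [[ $key == lbaf* && ${nvme2n1[$key]} == *'(in use)'* ]] || continue
        lbads=${nvme2n1[$key]#*lbads:}; lbads=${lbads%% *}
        echo "$key: block size $((1 << lbads)) bytes"   # lbaf4: 4096 bytes
    done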
nvme/functions.sh@21 -- # read -r reg val 00:09:58.161 06:43:51 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n2 00:09:58.161 06:43:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:58.161 06:43:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.161 06:43:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.161 06:43:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:58.161 06:43:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsze]="0x100000"' 00:09:58.161 06:43:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsze]=0x100000 00:09:58.161 06:43:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.161 06:43:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.161 06:43:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:58.161 06:43:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[ncap]="0x100000"' 00:09:58.161 06:43:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[ncap]=0x100000 00:09:58.161 06:43:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.161 06:43:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.161 06:43:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:58.161 06:43:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nuse]="0x100000"' 00:09:58.161 06:43:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nuse]=0x100000 00:09:58.161 06:43:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.161 06:43:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.161 06:43:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:58.161 06:43:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsfeat]="0x14"' 00:09:58.161 06:43:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsfeat]=0x14 00:09:58.161 06:43:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.161 06:43:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.161 06:43:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:58.161 06:43:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nlbaf]="7"' 00:09:58.161 06:43:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nlbaf]=7 00:09:58.161 06:43:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.161 06:43:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.161 06:43:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:58.161 06:43:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[flbas]="0x4"' 00:09:58.161 06:43:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[flbas]=0x4 00:09:58.161 06:43:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.161 06:43:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.161 06:43:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:58.161 06:43:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[mc]="0x3"' 00:09:58.161 06:43:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mc]=0x3 00:09:58.161 06:43:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.161 06:43:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.161 06:43:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:58.161 06:43:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[dpc]="0x1f"' 00:09:58.161 06:43:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dpc]=0x1f 00:09:58.161 06:43:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.161 06:43:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.161 06:43:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:58.161 06:43:51 nvme_fdp 
-- nvme/functions.sh@23 -- # eval 'nvme2n2[dps]="0"' 00:09:58.161 06:43:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dps]=0 00:09:58.161 06:43:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.161 06:43:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.161 06:43:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:58.162 06:43:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nmic]="0"' 00:09:58.162 06:43:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nmic]=0 00:09:58.162 06:43:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.162 06:43:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.162 06:43:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:58.162 06:43:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[rescap]="0"' 00:09:58.162 06:43:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[rescap]=0 00:09:58.162 06:43:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.162 06:43:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.162 06:43:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:58.162 06:43:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[fpi]="0"' 00:09:58.162 06:43:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[fpi]=0 00:09:58.162 06:43:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.162 06:43:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.162 06:43:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:58.162 06:43:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[dlfeat]="1"' 00:09:58.162 06:43:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dlfeat]=1 00:09:58.162 06:43:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.162 06:43:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.162 06:43:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:58.162 06:43:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawun]="0"' 00:09:58.162 06:43:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nawun]=0 00:09:58.162 06:43:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.162 06:43:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.162 06:43:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:58.162 06:43:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawupf]="0"' 00:09:58.162 06:43:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nawupf]=0 00:09:58.162 06:43:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.162 06:43:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.162 06:43:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:58.162 06:43:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nacwu]="0"' 00:09:58.162 06:43:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nacwu]=0 00:09:58.162 06:43:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.162 06:43:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.162 06:43:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:58.162 06:43:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabsn]="0"' 00:09:58.162 06:43:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabsn]=0 00:09:58.162 06:43:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.162 06:43:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.162 06:43:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:58.162 06:43:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabo]="0"' 00:09:58.162 06:43:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabo]=0 00:09:58.162 06:43:51 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:09:58.162 06:43:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.162 06:43:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:58.162 06:43:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabspf]="0"' 00:09:58.162 06:43:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabspf]=0 00:09:58.162 06:43:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.162 06:43:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.162 06:43:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:58.162 06:43:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[noiob]="0"' 00:09:58.162 06:43:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[noiob]=0 00:09:58.162 06:43:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.162 06:43:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.162 06:43:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:58.162 06:43:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmcap]="0"' 00:09:58.162 06:43:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nvmcap]=0 00:09:58.162 06:43:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.162 06:43:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.162 06:43:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:58.162 06:43:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwg]="0"' 00:09:58.162 06:43:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npwg]=0 00:09:58.162 06:43:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.162 06:43:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.162 06:43:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:58.162 06:43:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwa]="0"' 00:09:58.162 06:43:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npwa]=0 00:09:58.162 06:43:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.162 06:43:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.162 06:43:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:58.162 06:43:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npdg]="0"' 00:09:58.162 06:43:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npdg]=0 00:09:58.162 06:43:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.162 06:43:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.162 06:43:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:58.162 06:43:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npda]="0"' 00:09:58.162 06:43:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npda]=0 00:09:58.162 06:43:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.162 06:43:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.162 06:43:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:58.162 06:43:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nows]="0"' 00:09:58.162 06:43:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nows]=0 00:09:58.162 06:43:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.162 06:43:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.162 06:43:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:58.162 06:43:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[mssrl]="128"' 00:09:58.162 06:43:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mssrl]=128 00:09:58.162 06:43:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.162 06:43:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.162 06:43:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ 
-n 128 ]] 00:09:58.162 06:43:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[mcl]="128"' 00:09:58.162 06:43:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mcl]=128 00:09:58.162 06:43:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.162 06:43:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.162 06:43:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:58.162 06:43:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[msrc]="127"' 00:09:58.162 06:43:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[msrc]=127 00:09:58.162 06:43:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.162 06:43:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.162 06:43:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:58.162 06:43:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nulbaf]="0"' 00:09:58.162 06:43:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nulbaf]=0 00:09:58.162 06:43:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.162 06:43:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.162 06:43:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:58.162 06:43:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[anagrpid]="0"' 00:09:58.162 06:43:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[anagrpid]=0 00:09:58.162 06:43:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.162 06:43:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.162 06:43:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:58.162 06:43:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsattr]="0"' 00:09:58.162 06:43:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsattr]=0 00:09:58.162 06:43:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.162 06:43:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.162 06:43:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:58.162 06:43:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmsetid]="0"' 00:09:58.162 06:43:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nvmsetid]=0 00:09:58.162 06:43:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.162 06:43:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.162 06:43:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:58.162 06:43:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[endgid]="0"' 00:09:58.162 06:43:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[endgid]=0 00:09:58.162 06:43:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.162 06:43:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.162 06:43:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:58.162 06:43:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nguid]="00000000000000000000000000000000"' 00:09:58.162 06:43:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nguid]=00000000000000000000000000000000 00:09:58.162 06:43:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.162 06:43:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.162 06:43:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:58.162 06:43:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[eui64]="0000000000000000"' 00:09:58.162 06:43:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[eui64]=0000000000000000 00:09:58.162 06:43:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.162 06:43:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.162 06:43:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 
lbads:9 rp:0 ]] 00:09:58.162 06:43:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:58.162 06:43:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:58.162 06:43:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.162 06:43:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.162 06:43:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:58.162 06:43:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:58.162 06:43:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:58.162 06:43:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.162 06:43:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.162 06:43:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:58.162 06:43:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:58.162 06:43:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:58.162 06:43:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.162 06:43:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.162 06:43:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:58.162 06:43:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:58.163 06:43:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:58.163 06:43:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.163 06:43:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.163 06:43:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:58.163 06:43:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:58.163 06:43:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:58.163 06:43:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.163 06:43:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.163 06:43:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:58.163 06:43:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:58.163 06:43:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:58.163 06:43:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.163 06:43:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.163 06:43:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:58.163 06:43:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:58.163 06:43:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:58.163 06:43:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.163 06:43:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.163 06:43:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:58.163 06:43:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:58.163 06:43:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:58.163 06:43:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.163 06:43:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.163 06:43:51 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n2 00:09:58.163 06:43:51 nvme_fdp -- nvme/functions.sh@54 -- # for 
ns in "$ctrl/${ctrl##*/}n"* 00:09:58.163 06:43:51 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n3 ]] 00:09:58.163 06:43:51 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n3 00:09:58.163 06:43:51 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n3 id-ns /dev/nvme2n3 00:09:58.163 06:43:51 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n3 reg val 00:09:58.163 06:43:51 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:58.163 06:43:51 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n3=()' 00:09:58.163 06:43:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.163 06:43:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.163 06:43:51 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n3 00:09:58.163 06:43:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:58.163 06:43:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.163 06:43:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.163 06:43:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:58.163 06:43:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsze]="0x100000"' 00:09:58.163 06:43:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsze]=0x100000 00:09:58.163 06:43:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.163 06:43:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.163 06:43:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:58.163 06:43:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[ncap]="0x100000"' 00:09:58.163 06:43:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[ncap]=0x100000 00:09:58.163 06:43:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.163 06:43:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.163 06:43:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:58.163 06:43:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nuse]="0x100000"' 00:09:58.163 06:43:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nuse]=0x100000 00:09:58.163 06:43:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.163 06:43:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.163 06:43:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:58.163 06:43:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsfeat]="0x14"' 00:09:58.163 06:43:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsfeat]=0x14 00:09:58.163 06:43:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.163 06:43:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.163 06:43:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:58.163 06:43:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nlbaf]="7"' 00:09:58.163 06:43:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nlbaf]=7 00:09:58.163 06:43:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.163 06:43:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.163 06:43:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:58.163 06:43:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[flbas]="0x4"' 00:09:58.163 06:43:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[flbas]=0x4 00:09:58.163 06:43:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.163 06:43:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.163 06:43:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:58.163 06:43:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mc]="0x3"' 00:09:58.163 06:43:51 nvme_fdp -- nvme/functions.sh@23 -- # 
nvme2n3[mc]=0x3 00:09:58.163 06:43:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.163 06:43:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.163 06:43:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:58.163 06:43:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dpc]="0x1f"' 00:09:58.163 06:43:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dpc]=0x1f 00:09:58.163 06:43:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.163 06:43:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.163 06:43:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:58.163 06:43:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dps]="0"' 00:09:58.163 06:43:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dps]=0 00:09:58.163 06:43:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.163 06:43:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.163 06:43:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:58.163 06:43:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nmic]="0"' 00:09:58.163 06:43:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nmic]=0 00:09:58.163 06:43:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.163 06:43:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.163 06:43:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:58.163 06:43:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[rescap]="0"' 00:09:58.163 06:43:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[rescap]=0 00:09:58.163 06:43:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.163 06:43:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.163 06:43:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:58.163 06:43:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[fpi]="0"' 00:09:58.163 06:43:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[fpi]=0 00:09:58.163 06:43:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.163 06:43:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.163 06:43:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:58.163 06:43:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dlfeat]="1"' 00:09:58.163 06:43:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dlfeat]=1 00:09:58.163 06:43:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.163 06:43:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.163 06:43:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:58.163 06:43:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawun]="0"' 00:09:58.163 06:43:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nawun]=0 00:09:58.163 06:43:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.163 06:43:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.163 06:43:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:58.163 06:43:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawupf]="0"' 00:09:58.163 06:43:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nawupf]=0 00:09:58.163 06:43:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.163 06:43:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.163 06:43:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:58.163 06:43:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nacwu]="0"' 00:09:58.163 06:43:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nacwu]=0 00:09:58.163 06:43:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.163 06:43:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
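The trace through this span is the script's `nvme_get` helper walking `nvme id-ns /dev/nvme2n3` output one line at a time: it splits each `reg : val` pair on `:`, skips empty values, and evals the pair into a global associative array (`nvme2n3[nsze]=0x100000`, `nvme2n3[flbas]=0x4`, and so on). A minimal sketch of that parsing pattern, assuming nvme-cli's `field : value` output layout; `parse_id_ns` is an illustrative name, not the verbatim nvme/functions.sh source:

    # Capture "field : value" lines from nvme-cli into a global
    # associative array named by $1, mirroring the eval pattern in this trace.
    parse_id_ns() {
        local ref=$1 dev=$2 reg val
        declare -gA "$ref=()"                 # e.g. nvme2n3=()
        while IFS=: read -r reg val; do
            reg=${reg//[[:space:]]/}          # "lbaf  0 " -> "lbaf0"
            [[ -n $val ]] || continue         # skip headers and blank values
            eval "${ref}[${reg}]=\"${val# }\""
        done < <(nvme id-ns "$dev")
    }

    parse_id_ns nvme2n3 /dev/nvme2n3
    echo "nsze=${nvme2n3[nsze]} flbas=${nvme2n3[flbas]}"
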
00:09:58.163 06:43:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:58.163 06:43:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabsn]="0"' 00:09:58.163 06:43:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabsn]=0 00:09:58.163 06:43:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.163 06:43:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.163 06:43:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:58.163 06:43:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabo]="0"' 00:09:58.163 06:43:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabo]=0 00:09:58.163 06:43:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.163 06:43:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.163 06:43:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:58.163 06:43:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabspf]="0"' 00:09:58.163 06:43:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabspf]=0 00:09:58.163 06:43:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.163 06:43:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.163 06:43:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:58.163 06:43:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[noiob]="0"' 00:09:58.163 06:43:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[noiob]=0 00:09:58.163 06:43:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.163 06:43:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.163 06:43:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:58.163 06:43:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmcap]="0"' 00:09:58.163 06:43:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nvmcap]=0 00:09:58.163 06:43:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.163 06:43:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.163 06:43:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:58.163 06:43:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwg]="0"' 00:09:58.163 06:43:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npwg]=0 00:09:58.163 06:43:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.163 06:43:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.163 06:43:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:58.163 06:43:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwa]="0"' 00:09:58.163 06:43:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npwa]=0 00:09:58.163 06:43:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.163 06:43:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.164 06:43:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:58.164 06:43:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npdg]="0"' 00:09:58.164 06:43:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npdg]=0 00:09:58.164 06:43:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.164 06:43:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.164 06:43:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:58.164 06:43:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npda]="0"' 00:09:58.164 06:43:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npda]=0 00:09:58.164 06:43:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.164 06:43:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.164 06:43:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:58.164 06:43:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nows]="0"' 00:09:58.164 
06:43:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nows]=0 00:09:58.164 06:43:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.164 06:43:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.164 06:43:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:58.164 06:43:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mssrl]="128"' 00:09:58.164 06:43:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[mssrl]=128 00:09:58.164 06:43:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.164 06:43:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.164 06:43:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:58.164 06:43:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mcl]="128"' 00:09:58.164 06:43:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[mcl]=128 00:09:58.164 06:43:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.164 06:43:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.164 06:43:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:58.164 06:43:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[msrc]="127"' 00:09:58.164 06:43:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[msrc]=127 00:09:58.164 06:43:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.164 06:43:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.164 06:43:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:58.164 06:43:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nulbaf]="0"' 00:09:58.164 06:43:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nulbaf]=0 00:09:58.164 06:43:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.164 06:43:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.164 06:43:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:58.164 06:43:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[anagrpid]="0"' 00:09:58.164 06:43:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[anagrpid]=0 00:09:58.164 06:43:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.164 06:43:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.164 06:43:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:58.164 06:43:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsattr]="0"' 00:09:58.164 06:43:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsattr]=0 00:09:58.164 06:43:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.164 06:43:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.164 06:43:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:58.164 06:43:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmsetid]="0"' 00:09:58.164 06:43:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nvmsetid]=0 00:09:58.164 06:43:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.164 06:43:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.164 06:43:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:58.164 06:43:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[endgid]="0"' 00:09:58.164 06:43:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[endgid]=0 00:09:58.164 06:43:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.164 06:43:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.164 06:43:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:58.164 06:43:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nguid]="00000000000000000000000000000000"' 00:09:58.164 06:43:51 nvme_fdp -- nvme/functions.sh@23 -- # 
nvme2n3[nguid]=00000000000000000000000000000000 00:09:58.164 06:43:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.164 06:43:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.164 06:43:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:58.164 06:43:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[eui64]="0000000000000000"' 00:09:58.164 06:43:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[eui64]=0000000000000000 00:09:58.164 06:43:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.164 06:43:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.164 06:43:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:58.164 06:43:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:58.164 06:43:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:58.164 06:43:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.164 06:43:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.164 06:43:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:58.164 06:43:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:58.164 06:43:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:58.164 06:43:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.164 06:43:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.164 06:43:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:58.164 06:43:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:58.164 06:43:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:58.164 06:43:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.164 06:43:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.164 06:43:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:58.164 06:43:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:58.164 06:43:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:58.164 06:43:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.164 06:43:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.164 06:43:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:58.164 06:43:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:58.164 06:43:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:58.164 06:43:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.164 06:43:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.164 06:43:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:58.164 06:43:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:58.164 06:43:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:58.164 06:43:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.164 06:43:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.164 06:43:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:58.164 06:43:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:58.164 06:43:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:58.164 06:43:51 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:09:58.164 06:43:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.164 06:43:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:58.164 06:43:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:58.164 06:43:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:58.164 06:43:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.164 06:43:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.164 06:43:51 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n3 00:09:58.164 06:43:51 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme2 00:09:58.164 06:43:51 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme2_ns 00:09:58.164 06:43:51 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:12.0 00:09:58.164 06:43:51 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme2 00:09:58.164 06:43:51 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:09:58.164 06:43:51 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme3 ]] 00:09:58.164 06:43:51 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:13.0 00:09:58.164 06:43:51 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:13.0 00:09:58.164 06:43:51 nvme_fdp -- scripts/common.sh@18 -- # local i 00:09:58.164 06:43:51 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:13.0 ]] 00:09:58.164 06:43:51 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:58.164 06:43:51 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:09:58.164 06:43:51 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme3 00:09:58.164 06:43:51 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme3 id-ctrl /dev/nvme3 00:09:58.164 06:43:51 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme3 reg val 00:09:58.164 06:43:51 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:58.164 06:43:51 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme3=()' 00:09:58.164 06:43:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.164 06:43:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.164 06:43:51 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme3 00:09:58.164 06:43:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:58.164 06:43:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.164 06:43:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.164 06:43:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:09:58.164 06:43:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vid]="0x1b36"' 00:09:58.164 06:43:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vid]=0x1b36 00:09:58.164 06:43:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.164 06:43:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.164 06:43:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:09:58.164 06:43:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ssvid]="0x1af4"' 00:09:58.164 06:43:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ssvid]=0x1af4 00:09:58.164 06:43:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.164 06:43:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.164 06:43:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12343 ]] 00:09:58.164 06:43:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sn]="12343 "' 00:09:58.164 06:43:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sn]='12343 ' 00:09:58.164 06:43:51 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:09:58.164 06:43:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.164 06:43:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:09:58.164 06:43:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mn]="QEMU NVMe Ctrl "' 00:09:58.165 06:43:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mn]='QEMU NVMe Ctrl ' 00:09:58.165 06:43:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.165 06:43:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.165 06:43:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:09:58.165 06:43:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fr]="8.0.0 "' 00:09:58.165 06:43:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fr]='8.0.0 ' 00:09:58.165 06:43:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.165 06:43:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.165 06:43:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:09:58.165 06:43:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rab]="6"' 00:09:58.165 06:43:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rab]=6 00:09:58.165 06:43:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.165 06:43:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.165 06:43:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:09:58.165 06:43:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ieee]="525400"' 00:09:58.165 06:43:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ieee]=525400 00:09:58.165 06:43:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.165 06:43:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.165 06:43:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x2 ]] 00:09:58.165 06:43:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cmic]="0x2"' 00:09:58.165 06:43:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cmic]=0x2 00:09:58.165 06:43:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.165 06:43:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.165 06:43:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:58.165 06:43:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mdts]="7"' 00:09:58.165 06:43:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mdts]=7 00:09:58.165 06:43:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.165 06:43:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.165 06:43:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:58.165 06:43:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cntlid]="0"' 00:09:58.165 06:43:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cntlid]=0 00:09:58.165 06:43:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.165 06:43:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.165 06:43:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:09:58.165 06:43:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ver]="0x10400"' 00:09:58.165 06:43:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ver]=0x10400 00:09:58.165 06:43:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.165 06:43:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.165 06:43:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:58.165 06:43:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3r]="0"' 00:09:58.165 06:43:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rtd3r]=0 00:09:58.165 06:43:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.165 06:43:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.165 06:43:51 
nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:58.165 06:43:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3e]="0"' 00:09:58.165 06:43:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rtd3e]=0 00:09:58.165 06:43:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.165 06:43:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.165 06:43:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:09:58.165 06:43:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oaes]="0x100"' 00:09:58.165 06:43:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oaes]=0x100 00:09:58.165 06:43:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.165 06:43:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.165 06:43:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x88010 ]] 00:09:58.165 06:43:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ctratt]="0x88010"' 00:09:58.165 06:43:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ctratt]=0x88010 00:09:58.165 06:43:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.165 06:43:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.165 06:43:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:58.165 06:43:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rrls]="0"' 00:09:58.165 06:43:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rrls]=0 00:09:58.165 06:43:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.165 06:43:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.165 06:43:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:58.165 06:43:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cntrltype]="1"' 00:09:58.165 06:43:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cntrltype]=1 00:09:58.165 06:43:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.165 06:43:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.165 06:43:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:09:58.165 06:43:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fguid]="00000000-0000-0000-0000-000000000000"' 00:09:58.165 06:43:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fguid]=00000000-0000-0000-0000-000000000000 00:09:58.165 06:43:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.165 06:43:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.165 06:43:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:58.165 06:43:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[crdt1]="0"' 00:09:58.165 06:43:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt1]=0 00:09:58.165 06:43:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.165 06:43:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.165 06:43:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:58.165 06:43:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[crdt2]="0"' 00:09:58.165 06:43:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt2]=0 00:09:58.165 06:43:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.165 06:43:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.165 06:43:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:58.165 06:43:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[crdt3]="0"' 00:09:58.165 06:43:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt3]=0 00:09:58.165 06:43:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.165 06:43:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.165 06:43:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:58.165 
06:43:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nvmsr]="0"' 00:09:58.165 06:43:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nvmsr]=0 00:09:58.165 06:43:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.165 06:43:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.165 06:43:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:58.165 06:43:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vwci]="0"' 00:09:58.165 06:43:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vwci]=0 00:09:58.165 06:43:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.165 06:43:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.165 06:43:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:58.165 06:43:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mec]="0"' 00:09:58.165 06:43:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mec]=0 00:09:58.165 06:43:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.165 06:43:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.165 06:43:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:09:58.165 06:43:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oacs]="0x12a"' 00:09:58.165 06:43:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oacs]=0x12a 00:09:58.165 06:43:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.165 06:43:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.165 06:43:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:58.165 06:43:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[acl]="3"' 00:09:58.165 06:43:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[acl]=3 00:09:58.165 06:43:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.165 06:43:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.165 06:43:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:58.165 06:43:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[aerl]="3"' 00:09:58.165 06:43:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[aerl]=3 00:09:58.165 06:43:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.165 06:43:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.165 06:43:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:58.165 06:43:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[frmw]="0x3"' 00:09:58.165 06:43:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[frmw]=0x3 00:09:58.165 06:43:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.165 06:43:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.165 06:43:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:58.165 06:43:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[lpa]="0x7"' 00:09:58.165 06:43:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[lpa]=0x7 00:09:58.165 06:43:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.165 06:43:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.165 06:43:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:58.165 06:43:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[elpe]="0"' 00:09:58.165 06:43:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[elpe]=0 00:09:58.165 06:43:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.165 06:43:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.165 06:43:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:58.165 06:43:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[npss]="0"' 00:09:58.165 06:43:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[npss]=0 00:09:58.165 06:43:51 nvme_fdp -- nvme/functions.sh@21 -- # 
IFS=: 00:09:58.165 06:43:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.165 06:43:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:58.165 06:43:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[avscc]="0"' 00:09:58.165 06:43:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[avscc]=0 00:09:58.165 06:43:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.165 06:43:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.165 06:43:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:58.165 06:43:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[apsta]="0"' 00:09:58.165 06:43:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[apsta]=0 00:09:58.165 06:43:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.165 06:43:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.165 06:43:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:09:58.165 06:43:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[wctemp]="343"' 00:09:58.165 06:43:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[wctemp]=343 00:09:58.165 06:43:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.165 06:43:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.165 06:43:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:09:58.166 06:43:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cctemp]="373"' 00:09:58.166 06:43:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cctemp]=373 00:09:58.166 06:43:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.166 06:43:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.166 06:43:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:58.166 06:43:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mtfa]="0"' 00:09:58.166 06:43:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mtfa]=0 00:09:58.166 06:43:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.166 06:43:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.166 06:43:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:58.166 06:43:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmpre]="0"' 00:09:58.166 06:43:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmpre]=0 00:09:58.166 06:43:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.166 06:43:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.166 06:43:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:58.166 06:43:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmmin]="0"' 00:09:58.166 06:43:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmmin]=0 00:09:58.166 06:43:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.166 06:43:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.166 06:43:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:58.166 06:43:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[tnvmcap]="0"' 00:09:58.166 06:43:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[tnvmcap]=0 00:09:58.166 06:43:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.166 06:43:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.166 06:43:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:58.166 06:43:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[unvmcap]="0"' 00:09:58.166 06:43:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[unvmcap]=0 00:09:58.166 06:43:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.166 06:43:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.166 06:43:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:58.166 06:43:51 nvme_fdp 
-- nvme/functions.sh@23 -- # eval 'nvme3[rpmbs]="0"' 00:09:58.166 06:43:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rpmbs]=0 00:09:58.166 06:43:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.166 06:43:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.166 06:43:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:58.166 06:43:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[edstt]="0"' 00:09:58.166 06:43:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[edstt]=0 00:09:58.166 06:43:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.166 06:43:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.166 06:43:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:58.166 06:43:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[dsto]="0"' 00:09:58.166 06:43:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[dsto]=0 00:09:58.166 06:43:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.166 06:43:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.166 06:43:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:58.166 06:43:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fwug]="0"' 00:09:58.166 06:43:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fwug]=0 00:09:58.166 06:43:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.166 06:43:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.166 06:43:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:58.166 06:43:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[kas]="0"' 00:09:58.166 06:43:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[kas]=0 00:09:58.166 06:43:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.166 06:43:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.166 06:43:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:58.166 06:43:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hctma]="0"' 00:09:58.166 06:43:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hctma]=0 00:09:58.166 06:43:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.166 06:43:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.166 06:43:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:58.166 06:43:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mntmt]="0"' 00:09:58.166 06:43:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mntmt]=0 00:09:58.166 06:43:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.166 06:43:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.166 06:43:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:58.166 06:43:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mxtmt]="0"' 00:09:58.166 06:43:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mxtmt]=0 00:09:58.166 06:43:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.166 06:43:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.166 06:43:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:58.166 06:43:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sanicap]="0"' 00:09:58.166 06:43:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sanicap]=0 00:09:58.166 06:43:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.166 06:43:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.166 06:43:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:58.166 06:43:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmminds]="0"' 00:09:58.166 06:43:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmminds]=0 00:09:58.166 06:43:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.166 
06:43:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.166 06:43:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:58.166 06:43:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmmaxd]="0"' 00:09:58.166 06:43:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmmaxd]=0 00:09:58.166 06:43:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.166 06:43:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.166 06:43:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:58.166 06:43:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nsetidmax]="0"' 00:09:58.166 06:43:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nsetidmax]=0 00:09:58.166 06:43:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.166 06:43:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.166 06:43:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:58.166 06:43:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[endgidmax]="1"' 00:09:58.166 06:43:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[endgidmax]=1 00:09:58.166 06:43:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.166 06:43:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.166 06:43:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:58.166 06:43:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anatt]="0"' 00:09:58.166 06:43:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anatt]=0 00:09:58.166 06:43:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.166 06:43:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.166 06:43:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:58.166 06:43:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anacap]="0"' 00:09:58.166 06:43:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anacap]=0 00:09:58.166 06:43:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.166 06:43:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.166 06:43:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:58.166 06:43:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anagrpmax]="0"' 00:09:58.166 06:43:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anagrpmax]=0 00:09:58.166 06:43:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.166 06:43:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.166 06:43:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:58.166 06:43:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nanagrpid]="0"' 00:09:58.166 06:43:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nanagrpid]=0 00:09:58.166 06:43:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.166 06:43:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.166 06:43:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:58.166 06:43:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[pels]="0"' 00:09:58.166 06:43:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[pels]=0 00:09:58.166 06:43:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.166 06:43:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.166 06:43:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:58.166 06:43:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[domainid]="0"' 00:09:58.166 06:43:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[domainid]=0 00:09:58.166 06:43:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.166 06:43:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.166 06:43:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:58.166 06:43:51 nvme_fdp 
-- nvme/functions.sh@23 -- # eval 'nvme3[megcap]="0"' 00:09:58.166 06:43:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[megcap]=0 00:09:58.166 06:43:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.166 06:43:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.166 06:43:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:09:58.166 06:43:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sqes]="0x66"' 00:09:58.166 06:43:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sqes]=0x66 00:09:58.166 06:43:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.166 06:43:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.166 06:43:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:09:58.166 06:43:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cqes]="0x44"' 00:09:58.166 06:43:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cqes]=0x44 00:09:58.166 06:43:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.167 06:43:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.167 06:43:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:58.167 06:43:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxcmd]="0"' 00:09:58.167 06:43:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxcmd]=0 00:09:58.167 06:43:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.167 06:43:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.167 06:43:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:09:58.167 06:43:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nn]="256"' 00:09:58.167 06:43:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nn]=256 00:09:58.167 06:43:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.167 06:43:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.167 06:43:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:09:58.167 06:43:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oncs]="0x15d"' 00:09:58.167 06:43:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oncs]=0x15d 00:09:58.167 06:43:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.167 06:43:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.167 06:43:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:58.167 06:43:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fuses]="0"' 00:09:58.167 06:43:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fuses]=0 00:09:58.167 06:43:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.167 06:43:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.167 06:43:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:58.167 06:43:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fna]="0"' 00:09:58.167 06:43:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fna]=0 00:09:58.167 06:43:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.167 06:43:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.167 06:43:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:58.167 06:43:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vwc]="0x7"' 00:09:58.167 06:43:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vwc]=0x7 00:09:58.167 06:43:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.167 06:43:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.167 06:43:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:58.167 06:43:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[awun]="0"' 00:09:58.167 06:43:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[awun]=0 00:09:58.167 06:43:51 nvme_fdp -- nvme/functions.sh@21 
-- # IFS=: 00:09:58.167 06:43:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.167 06:43:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:58.167 06:43:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[awupf]="0"' 00:09:58.167 06:43:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[awupf]=0 00:09:58.167 06:43:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.167 06:43:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.167 06:43:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:58.167 06:43:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[icsvscc]="0"' 00:09:58.167 06:43:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[icsvscc]=0 00:09:58.167 06:43:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.167 06:43:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.167 06:43:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:58.167 06:43:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nwpc]="0"' 00:09:58.167 06:43:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nwpc]=0 00:09:58.167 06:43:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.167 06:43:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.167 06:43:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:58.167 06:43:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[acwu]="0"' 00:09:58.167 06:43:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[acwu]=0 00:09:58.167 06:43:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.167 06:43:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.167 06:43:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:58.167 06:43:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ocfs]="0x3"' 00:09:58.167 06:43:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ocfs]=0x3 00:09:58.167 06:43:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.167 06:43:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.167 06:43:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:09:58.167 06:43:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sgls]="0x1"' 00:09:58.167 06:43:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sgls]=0x1 00:09:58.167 06:43:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.167 06:43:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.167 06:43:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:58.167 06:43:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mnan]="0"' 00:09:58.167 06:43:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mnan]=0 00:09:58.167 06:43:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.167 06:43:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.167 06:43:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:58.167 06:43:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxdna]="0"' 00:09:58.167 06:43:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxdna]=0 00:09:58.167 06:43:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.167 06:43:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.167 06:43:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:58.167 06:43:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxcna]="0"' 00:09:58.167 06:43:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxcna]=0 00:09:58.167 06:43:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.167 06:43:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.167 06:43:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:fdp-subsys3 ]] 
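Once an id-ctrl dump has been captured this way, individual registers are read back through a bash nameref rather than by re-querying the device; the `get_nvme_ctrl_feature` calls later in this trace (`local -n _ctrl=nvme1` ... `echo 0x8000`) are exactly that lookup. A sketch of the pattern; `get_reg` is an illustrative name, and the array contents are values recorded above:

    # Read one captured register back out via a nameref.
    get_reg() {
        local ctrl=$1 reg=$2
        local -n _ctrl=$ctrl                  # alias the named array, e.g. nvme3
        [[ -n ${_ctrl[$reg]} ]] && echo "${_ctrl[$reg]}"
    }

    declare -gA nvme3=([ctratt]=0x88010 [mdts]=7)   # values from this trace
    get_reg nvme3 ctratt                            # -> 0x88010
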
00:09:58.167 06:43:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[subnqn]="nqn.2019-08.org.qemu:fdp-subsys3"' 00:09:58.167 06:43:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[subnqn]=nqn.2019-08.org.qemu:fdp-subsys3 00:09:58.167 06:43:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.167 06:43:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.167 06:43:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:58.167 06:43:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ioccsz]="0"' 00:09:58.167 06:43:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ioccsz]=0 00:09:58.167 06:43:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.167 06:43:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.167 06:43:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:58.167 06:43:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[iorcsz]="0"' 00:09:58.167 06:43:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[iorcsz]=0 00:09:58.167 06:43:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.167 06:43:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.167 06:43:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:58.167 06:43:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[icdoff]="0"' 00:09:58.167 06:43:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[icdoff]=0 00:09:58.167 06:43:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.167 06:43:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.167 06:43:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:58.167 06:43:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fcatt]="0"' 00:09:58.167 06:43:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fcatt]=0 00:09:58.167 06:43:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.167 06:43:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.167 06:43:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:58.167 06:43:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[msdbd]="0"' 00:09:58.167 06:43:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[msdbd]=0 00:09:58.167 06:43:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.167 06:43:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.167 06:43:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:58.167 06:43:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ofcs]="0"' 00:09:58.167 06:43:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ofcs]=0 00:09:58.167 06:43:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.167 06:43:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.167 06:43:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:09:58.167 06:43:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:09:58.167 06:43:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:09:58.167 06:43:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.167 06:43:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.167 06:43:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:09:58.167 06:43:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:09:58.167 06:43:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rwt]='0 rwl:0 idle_power:- active_power:-' 00:09:58.167 06:43:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.167 06:43:51 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:09:58.167 06:43:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:09:58.167 06:43:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[active_power_workload]="-"' 00:09:58.167 06:43:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[active_power_workload]=- 00:09:58.167 06:43:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:58.167 06:43:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:58.167 06:43:51 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme3_ns 00:09:58.167 06:43:51 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme3 00:09:58.167 06:43:51 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme3_ns 00:09:58.167 06:43:51 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:13.0 00:09:58.167 06:43:51 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme3 00:09:58.167 06:43:51 nvme_fdp -- nvme/functions.sh@65 -- # (( 4 > 0 )) 00:09:58.167 06:43:51 nvme_fdp -- nvme/nvme_fdp.sh@13 -- # get_ctrl_with_feature fdp 00:09:58.167 06:43:51 nvme_fdp -- nvme/functions.sh@204 -- # local _ctrls feature=fdp 00:09:58.167 06:43:51 nvme_fdp -- nvme/functions.sh@206 -- # _ctrls=($(get_ctrls_with_feature "$feature")) 00:09:58.167 06:43:51 nvme_fdp -- nvme/functions.sh@206 -- # get_ctrls_with_feature fdp 00:09:58.167 06:43:51 nvme_fdp -- nvme/functions.sh@192 -- # (( 4 == 0 )) 00:09:58.167 06:43:51 nvme_fdp -- nvme/functions.sh@194 -- # local ctrl feature=fdp 00:09:58.167 06:43:51 nvme_fdp -- nvme/functions.sh@196 -- # type -t ctrl_has_fdp 00:09:58.167 06:43:51 nvme_fdp -- nvme/functions.sh@196 -- # [[ function == function ]] 00:09:58.167 06:43:51 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:09:58.167 06:43:51 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme1 00:09:58.167 06:43:51 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme1 ctratt 00:09:58.167 06:43:51 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme1 00:09:58.168 06:43:51 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme1 00:09:58.168 06:43:51 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme1 ctratt 00:09:58.168 06:43:51 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme1 reg=ctratt 00:09:58.168 06:43:51 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme1 ]] 00:09:58.168 06:43:51 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme1 00:09:58.168 06:43:51 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 00:09:58.168 06:43:51 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:09:58.168 06:43:51 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000 00:09:58.168 06:43:51 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:09:58.168 06:43:51 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:09:58.168 06:43:51 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme0 00:09:58.168 06:43:51 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme0 ctratt 00:09:58.168 06:43:51 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme0 00:09:58.168 06:43:51 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme0 00:09:58.168 06:43:51 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme0 ctratt 00:09:58.168 06:43:51 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme0 reg=ctratt 00:09:58.168 06:43:51 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme0 ]] 00:09:58.168 06:43:51 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme0 00:09:58.168 06:43:51 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 
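The capability scan traced on either side of this point boils down to one bitmask test: CTRATT bit 19 advertises Flexible Data Placement, so nvme3 (ctratt=0x88010, bit 19 set) is selected while nvme0/nvme1/nvme2 (ctratt=0x8000) are skipped. A condensed sketch of the ctrl_has_fdp/get_ctratt pair from nvme/functions.sh, simplified here by treating get_ctratt as a helper that echoes the register value out of the parsed nvme* arrays:

    ctrl_has_fdp() {
        local ctrl=$1 ctratt
        ctratt=$(get_ctratt "$ctrl")   # e.g. 0x8000 (no FDP) or 0x88010 (FDP)
        # CTRATT bit 19 is the Flexible Data Placement bit:
        # 0x88010 & 0x80000 != 0, so only nvme3 passes this test.
        (( ctratt & 1 << 19 ))
    }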
00:09:58.168 06:43:51 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:09:58.168 06:43:51 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000 00:09:58.168 06:43:51 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:09:58.168 06:43:51 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:09:58.168 06:43:51 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme3 00:09:58.168 06:43:51 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme3 ctratt 00:09:58.168 06:43:51 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme3 00:09:58.168 06:43:51 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme3 00:09:58.168 06:43:51 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme3 ctratt 00:09:58.168 06:43:51 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme3 reg=ctratt 00:09:58.168 06:43:51 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme3 ]] 00:09:58.168 06:43:51 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme3 00:09:58.168 06:43:51 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x88010 ]] 00:09:58.168 06:43:51 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x88010 00:09:58.168 06:43:51 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x88010 00:09:58.168 06:43:51 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:09:58.168 06:43:51 nvme_fdp -- nvme/functions.sh@199 -- # echo nvme3 00:09:58.168 06:43:51 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:09:58.168 06:43:51 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme2 00:09:58.168 06:43:51 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme2 ctratt 00:09:58.168 06:43:51 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme2 00:09:58.168 06:43:51 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme2 00:09:58.168 06:43:51 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme2 ctratt 00:09:58.168 06:43:51 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme2 reg=ctratt 00:09:58.168 06:43:51 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme2 ]] 00:09:58.168 06:43:51 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme2 00:09:58.168 06:43:51 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 00:09:58.168 06:43:51 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:09:58.168 06:43:51 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000 00:09:58.168 06:43:51 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:09:58.168 06:43:51 nvme_fdp -- nvme/functions.sh@207 -- # (( 1 > 0 )) 00:09:58.168 06:43:51 nvme_fdp -- nvme/functions.sh@208 -- # echo nvme3 00:09:58.168 06:43:51 nvme_fdp -- nvme/functions.sh@209 -- # return 0 00:09:58.168 06:43:51 nvme_fdp -- nvme/nvme_fdp.sh@13 -- # ctrl=nvme3 00:09:58.168 06:43:51 nvme_fdp -- nvme/nvme_fdp.sh@14 -- # bdf=0000:00:13.0 00:09:58.168 06:43:51 nvme_fdp -- nvme/nvme_fdp.sh@16 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:09:58.740 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:59.311 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:09:59.311 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:09:59.311 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:09:59.311 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:09:59.311 06:43:52 nvme_fdp -- nvme/nvme_fdp.sh@18 -- # run_test nvme_flexible_data_placement /home/vagrant/spdk_repo/spdk/test/nvme/fdp/fdp -r 'trtype:pcie traddr:0000:00:13.0' 00:09:59.311 06:43:52 nvme_fdp -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:09:59.312 06:43:52 
nvme_fdp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:59.312 06:43:52 nvme_fdp -- common/autotest_common.sh@10 -- # set +x 00:09:59.312 ************************************ 00:09:59.312 START TEST nvme_flexible_data_placement 00:09:59.312 ************************************ 00:09:59.312 06:43:52 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/fdp/fdp -r 'trtype:pcie traddr:0000:00:13.0' 00:09:59.573 Initializing NVMe Controllers 00:09:59.573 Attaching to 0000:00:13.0 00:09:59.573 Controller supports FDP Attached to 0000:00:13.0 00:09:59.573 Namespace ID: 1 Endurance Group ID: 1 00:09:59.573 Initialization complete. 00:09:59.573 00:09:59.573 ================================== 00:09:59.573 == FDP tests for Namespace: #01 == 00:09:59.573 ================================== 00:09:59.573 00:09:59.573 Get Feature: FDP: 00:09:59.573 ================= 00:09:59.573 Enabled: Yes 00:09:59.573 FDP configuration Index: 0 00:09:59.573 00:09:59.573 FDP configurations log page 00:09:59.573 =========================== 00:09:59.573 Number of FDP configurations: 1 00:09:59.573 Version: 0 00:09:59.573 Size: 112 00:09:59.573 FDP Configuration Descriptor: 0 00:09:59.573 Descriptor Size: 96 00:09:59.573 Reclaim Group Identifier format: 2 00:09:59.573 FDP Volatile Write Cache: Not Present 00:09:59.573 FDP Configuration: Valid 00:09:59.573 Vendor Specific Size: 0 00:09:59.573 Number of Reclaim Groups: 2 00:09:59.573 Number of Reclaim Unit Handles: 8 00:09:59.573 Max Placement Identifiers: 128 00:09:59.573 Number of Namespaces Supported: 256 00:09:59.573 Reclaim Unit Nominal Size: 6000000 bytes 00:09:59.573 Estimated Reclaim Unit Time Limit: Not Reported 00:09:59.573 RUH Desc #000: RUH Type: Initially Isolated 00:09:59.573 RUH Desc #001: RUH Type: Initially Isolated 00:09:59.573 RUH Desc #002: RUH Type: Initially Isolated 00:09:59.573 RUH Desc #003: RUH Type: Initially Isolated 00:09:59.573 RUH Desc #004: RUH Type: Initially Isolated 00:09:59.573 RUH Desc #005: RUH Type: Initially Isolated 00:09:59.573 RUH Desc #006: RUH Type: Initially Isolated 00:09:59.573 RUH Desc #007: RUH Type: Initially Isolated 00:09:59.573 00:09:59.573 FDP reclaim unit handle usage log page 00:09:59.573 ====================================== 00:09:59.573 Number of Reclaim Unit Handles: 8 00:09:59.573 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:09:59.573 RUH Usage Desc #001: RUH Attributes: Unused 00:09:59.573 RUH Usage Desc #002: RUH Attributes: Unused 00:09:59.573 RUH Usage Desc #003: RUH Attributes: Unused 00:09:59.573 RUH Usage Desc #004: RUH Attributes: Unused 00:09:59.573 RUH Usage Desc #005: RUH Attributes: Unused 00:09:59.573 RUH Usage Desc #006: RUH Attributes: Unused 00:09:59.573 RUH Usage Desc #007: RUH Attributes: Unused 00:09:59.573 00:09:59.573 FDP statistics log page 00:09:59.573 ======================= 00:09:59.573 Host bytes with metadata written: 2084798464 00:09:59.573 Media bytes with metadata written: 2085908480 00:09:59.573 Media bytes erased: 0 00:09:59.573 00:09:59.573 FDP Reclaim unit handle status 00:09:59.573 ============================== 00:09:59.573 Number of RUHS descriptors: 2 00:09:59.573 RUHS Desc: #0000 PID: 0x0000 RUHID: 0x0000 ERUT: 0x00000000 RUAMW: 0x0000000000001bc8 00:09:59.573 RUHS Desc: #0001 PID: 0x4000 RUHID: 0x0000 ERUT: 0x00000000 RUAMW: 0x0000000000006000 00:09:59.573 00:09:59.573 FDP write on placement id: 0 success 00:09:59.573 00:09:59.573 Set Feature: Enabling FDP events on Placement handle:
#0 Success 00:09:59.573 00:09:59.573 IO mgmt send: RUH update for Placement ID: #0 Success 00:09:59.573 00:09:59.573 Get Feature: FDP Events for Placement handle: #0 00:09:59.573 ======================== 00:09:59.573 Number of FDP Events: 6 00:09:59.573 FDP Event: #0 Type: RU Not Written to Capacity Enabled: Yes 00:09:59.573 FDP Event: #1 Type: RU Time Limit Exceeded Enabled: Yes 00:09:59.573 FDP Event: #2 Type: Ctrlr Reset Modified RUHs Enabled: Yes 00:09:59.573 FDP Event: #3 Type: Invalid Placement Identifier Enabled: Yes 00:09:59.573 FDP Event: #4 Type: Media Reallocated Enabled: No 00:09:59.573 FDP Event: #5 Type: Implicitly modified RUH Enabled: No 00:09:59.573 00:09:59.573 FDP events log page 00:09:59.573 =================== 00:09:59.573 Number of FDP events: 1 00:09:59.573 FDP Event #0: 00:09:59.573 Event Type: RU Not Written to Capacity 00:09:59.573 Placement Identifier: Valid 00:09:59.573 NSID: Valid 00:09:59.573 Location: Valid 00:09:59.573 Placement Identifier: 0 00:09:59.573 Event Timestamp: 4 00:09:59.573 Namespace Identifier: 1 00:09:59.573 Reclaim Group Identifier: 0 00:09:59.573 Reclaim Unit Handle Identifier: 0 00:09:59.573 00:09:59.573 FDP test passed 00:09:59.573 00:09:59.573 real 0m0.234s 00:09:59.573 user 0m0.066s 00:09:59.573 sys 0m0.066s 00:09:59.573 ************************************ 00:09:59.573 END TEST nvme_flexible_data_placement 00:09:59.573 ************************************ 00:09:59.573 06:43:52 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:59.573 06:43:52 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@10 -- # set +x 00:09:59.835 ************************************ 00:09:59.835 END TEST nvme_fdp 00:09:59.835 ************************************ 00:09:59.835 00:09:59.835 real 0m7.880s 00:09:59.835 user 0m1.082s 00:09:59.835 sys 0m1.477s 00:09:59.835 06:43:52 nvme_fdp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:59.835 06:43:52 nvme_fdp -- common/autotest_common.sh@10 -- # set +x 00:09:59.835 06:43:52 -- spdk/autotest.sh@232 -- # [[ '' -eq 1 ]] 00:09:59.835 06:43:52 -- spdk/autotest.sh@236 -- # run_test nvme_rpc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh 00:09:59.835 06:43:52 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:59.835 06:43:52 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:59.835 06:43:52 -- common/autotest_common.sh@10 -- # set +x 00:09:59.835 ************************************ 00:09:59.835 START TEST nvme_rpc 00:09:59.835 ************************************ 00:09:59.835 06:43:52 nvme_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh 00:09:59.835 * Looking for test storage...
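The lcov probe that follows is scripts/common.sh's field-wise version comparison: `lt 1.15 2` splits both strings on `.`, `-` and `:` and compares numerically, field by field, padding the shorter version with zeros. A minimal sketch of that logic (the real cmp_versions additionally validates that each field is decimal):

    lt() { cmp_versions "$1" '<' "$2"; }
    cmp_versions() {
        local IFS=.-: op=$2 v
        local -a ver1 ver2
        read -ra ver1 <<< "$1"
        read -ra ver2 <<< "$3"
        for ((v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++)); do
            # first differing field decides; missing fields count as 0
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && { [[ $op == *">"* ]]; return; }
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && { [[ $op == *"<"* ]]; return; }
        done
        [[ $op == *"="* ]]   # all fields equal: true only for ==, <=, >=
    }

So `lt 1.15 2` compares 1 against 2 in the first field and returns true, which is why the LCOV_OPTS branch for lcov >= 1.15 is taken here.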
00:09:59.835 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:09:59.835 06:43:52 nvme_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:59.835 06:43:52 nvme_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:09:59.835 06:43:52 nvme_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:59.835 06:43:52 nvme_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:59.835 06:43:52 nvme_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:59.835 06:43:52 nvme_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:59.835 06:43:52 nvme_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:59.835 06:43:52 nvme_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:09:59.835 06:43:52 nvme_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:09:59.835 06:43:52 nvme_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:09:59.835 06:43:52 nvme_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:09:59.835 06:43:52 nvme_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:09:59.835 06:43:52 nvme_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:09:59.835 06:43:52 nvme_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:09:59.835 06:43:52 nvme_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:59.835 06:43:52 nvme_rpc -- scripts/common.sh@344 -- # case "$op" in 00:09:59.835 06:43:52 nvme_rpc -- scripts/common.sh@345 -- # : 1 00:09:59.835 06:43:52 nvme_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:59.835 06:43:52 nvme_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:59.835 06:43:52 nvme_rpc -- scripts/common.sh@365 -- # decimal 1 00:09:59.835 06:43:52 nvme_rpc -- scripts/common.sh@353 -- # local d=1 00:09:59.835 06:43:52 nvme_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:59.835 06:43:52 nvme_rpc -- scripts/common.sh@355 -- # echo 1 00:09:59.835 06:43:52 nvme_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:09:59.835 06:43:52 nvme_rpc -- scripts/common.sh@366 -- # decimal 2 00:09:59.835 06:43:52 nvme_rpc -- scripts/common.sh@353 -- # local d=2 00:09:59.835 06:43:52 nvme_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:59.835 06:43:52 nvme_rpc -- scripts/common.sh@355 -- # echo 2 00:09:59.835 06:43:52 nvme_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:09:59.835 06:43:52 nvme_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:59.835 06:43:52 nvme_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:59.835 06:43:52 nvme_rpc -- scripts/common.sh@368 -- # return 0 00:09:59.835 06:43:52 nvme_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:59.835 06:43:52 nvme_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:59.835 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:59.835 --rc genhtml_branch_coverage=1 00:09:59.835 --rc genhtml_function_coverage=1 00:09:59.835 --rc genhtml_legend=1 00:09:59.835 --rc geninfo_all_blocks=1 00:09:59.835 --rc geninfo_unexecuted_blocks=1 00:09:59.835 00:09:59.835 ' 00:09:59.835 06:43:52 nvme_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:59.835 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:59.835 --rc genhtml_branch_coverage=1 00:09:59.835 --rc genhtml_function_coverage=1 00:09:59.835 --rc genhtml_legend=1 00:09:59.835 --rc geninfo_all_blocks=1 00:09:59.835 --rc geninfo_unexecuted_blocks=1 00:09:59.835 00:09:59.835 ' 00:09:59.835 06:43:52 nvme_rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 
00:09:59.835 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:59.835 --rc genhtml_branch_coverage=1 00:09:59.835 --rc genhtml_function_coverage=1 00:09:59.835 --rc genhtml_legend=1 00:09:59.835 --rc geninfo_all_blocks=1 00:09:59.835 --rc geninfo_unexecuted_blocks=1 00:09:59.835 00:09:59.835 ' 00:09:59.835 06:43:52 nvme_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:59.835 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:59.835 --rc genhtml_branch_coverage=1 00:09:59.835 --rc genhtml_function_coverage=1 00:09:59.835 --rc genhtml_legend=1 00:09:59.835 --rc geninfo_all_blocks=1 00:09:59.835 --rc geninfo_unexecuted_blocks=1 00:09:59.835 00:09:59.835 ' 00:09:59.835 06:43:52 nvme_rpc -- nvme/nvme_rpc.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:59.835 06:43:52 nvme_rpc -- nvme/nvme_rpc.sh@13 -- # get_first_nvme_bdf 00:09:59.835 06:43:52 nvme_rpc -- common/autotest_common.sh@1509 -- # bdfs=() 00:09:59.835 06:43:52 nvme_rpc -- common/autotest_common.sh@1509 -- # local bdfs 00:09:59.835 06:43:52 nvme_rpc -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs)) 00:09:59.835 06:43:52 nvme_rpc -- common/autotest_common.sh@1510 -- # get_nvme_bdfs 00:09:59.835 06:43:52 nvme_rpc -- common/autotest_common.sh@1498 -- # bdfs=() 00:09:59.835 06:43:52 nvme_rpc -- common/autotest_common.sh@1498 -- # local bdfs 00:09:59.835 06:43:52 nvme_rpc -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:09:59.835 06:43:52 nvme_rpc -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:09:59.835 06:43:52 nvme_rpc -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:10:00.097 06:43:52 nvme_rpc -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:10:00.097 06:43:52 nvme_rpc -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:10:00.097 06:43:52 nvme_rpc -- common/autotest_common.sh@1512 -- # echo 0000:00:10.0 00:10:00.097 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:00.097 06:43:52 nvme_rpc -- nvme/nvme_rpc.sh@13 -- # bdf=0000:00:10.0 00:10:00.097 06:43:52 nvme_rpc -- nvme/nvme_rpc.sh@16 -- # spdk_tgt_pid=77537 00:10:00.097 06:43:52 nvme_rpc -- nvme/nvme_rpc.sh@17 -- # trap 'kill -9 ${spdk_tgt_pid}; exit 1' SIGINT SIGTERM EXIT 00:10:00.097 06:43:52 nvme_rpc -- nvme/nvme_rpc.sh@19 -- # waitforlisten 77537 00:10:00.097 06:43:52 nvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 77537 ']' 00:10:00.097 06:43:52 nvme_rpc -- nvme/nvme_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:10:00.097 06:43:52 nvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:00.097 06:43:52 nvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:00.097 06:43:52 nvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:00.097 06:43:52 nvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:00.097 06:43:52 nvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:00.097 [2024-11-18 06:43:53.036939] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 
00:10:00.097 [2024-11-18 06:43:53.037323] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77537 ] 00:10:00.357 [2024-11-18 06:43:53.198746] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:10:00.357 [2024-11-18 06:43:53.228921] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:00.357 [2024-11-18 06:43:53.228971] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:00.929 06:43:53 nvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:00.929 06:43:53 nvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:10:00.929 06:43:53 nvme_rpc -- nvme/nvme_rpc.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:10.0 00:10:01.190 Nvme0n1 00:10:01.190 06:43:54 nvme_rpc -- nvme/nvme_rpc.sh@27 -- # '[' -f non_existing_file ']' 00:10:01.190 06:43:54 nvme_rpc -- nvme/nvme_rpc.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_apply_firmware non_existing_file Nvme0n1 00:10:01.451 request: 00:10:01.451 { 00:10:01.451 "bdev_name": "Nvme0n1", 00:10:01.451 "filename": "non_existing_file", 00:10:01.451 "method": "bdev_nvme_apply_firmware", 00:10:01.451 "req_id": 1 00:10:01.451 } 00:10:01.451 Got JSON-RPC error response 00:10:01.451 response: 00:10:01.451 { 00:10:01.451 "code": -32603, 00:10:01.451 "message": "open file failed." 00:10:01.451 } 00:10:01.451 06:43:54 nvme_rpc -- nvme/nvme_rpc.sh@32 -- # rv=1 00:10:01.451 06:43:54 nvme_rpc -- nvme/nvme_rpc.sh@33 -- # '[' -z 1 ']' 00:10:01.451 06:43:54 nvme_rpc -- nvme/nvme_rpc.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:10:01.712 06:43:54 nvme_rpc -- nvme/nvme_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:10:01.712 06:43:54 nvme_rpc -- nvme/nvme_rpc.sh@40 -- # killprocess 77537 00:10:01.712 06:43:54 nvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 77537 ']' 00:10:01.712 06:43:54 nvme_rpc -- common/autotest_common.sh@958 -- # kill -0 77537 00:10:01.712 06:43:54 nvme_rpc -- common/autotest_common.sh@959 -- # uname 00:10:01.712 06:43:54 nvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:01.712 06:43:54 nvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 77537 00:10:01.712 killing process with pid 77537 00:10:01.712 06:43:54 nvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:01.712 06:43:54 nvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:01.712 06:43:54 nvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 77537' 00:10:01.712 06:43:54 nvme_rpc -- common/autotest_common.sh@973 -- # kill 77537 00:10:01.712 06:43:54 nvme_rpc -- common/autotest_common.sh@978 -- # wait 77537 00:10:01.973 00:10:01.973 real 0m2.277s 00:10:01.973 user 0m4.363s 00:10:01.973 sys 0m0.573s 00:10:01.973 06:43:55 nvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:01.973 ************************************ 00:10:01.973 END TEST nvme_rpc 00:10:01.973 06:43:55 nvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:01.973 ************************************ 00:10:02.235 06:43:55 -- spdk/autotest.sh@237 -- # run_test nvme_rpc_timeouts /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:10:02.235 06:43:55 -- common/autotest_common.sh@1105 -- # '[' 2 -le 
1 ']' 00:10:02.235 06:43:55 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:02.235 06:43:55 -- common/autotest_common.sh@10 -- # set +x 00:10:02.235 ************************************ 00:10:02.235 START TEST nvme_rpc_timeouts 00:10:02.235 ************************************ 00:10:02.235 06:43:55 nvme_rpc_timeouts -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:10:02.235 * Looking for test storage... 00:10:02.235 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:10:02.235 06:43:55 nvme_rpc_timeouts -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:02.235 06:43:55 nvme_rpc_timeouts -- common/autotest_common.sh@1693 -- # lcov --version 00:10:02.235 06:43:55 nvme_rpc_timeouts -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:02.235 06:43:55 nvme_rpc_timeouts -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:02.235 06:43:55 nvme_rpc_timeouts -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:02.235 06:43:55 nvme_rpc_timeouts -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:02.235 06:43:55 nvme_rpc_timeouts -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:02.235 06:43:55 nvme_rpc_timeouts -- scripts/common.sh@336 -- # IFS=.-: 00:10:02.235 06:43:55 nvme_rpc_timeouts -- scripts/common.sh@336 -- # read -ra ver1 00:10:02.235 06:43:55 nvme_rpc_timeouts -- scripts/common.sh@337 -- # IFS=.-: 00:10:02.235 06:43:55 nvme_rpc_timeouts -- scripts/common.sh@337 -- # read -ra ver2 00:10:02.235 06:43:55 nvme_rpc_timeouts -- scripts/common.sh@338 -- # local 'op=<' 00:10:02.235 06:43:55 nvme_rpc_timeouts -- scripts/common.sh@340 -- # ver1_l=2 00:10:02.235 06:43:55 nvme_rpc_timeouts -- scripts/common.sh@341 -- # ver2_l=1 00:10:02.235 06:43:55 nvme_rpc_timeouts -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:02.235 06:43:55 nvme_rpc_timeouts -- scripts/common.sh@344 -- # case "$op" in 00:10:02.235 06:43:55 nvme_rpc_timeouts -- scripts/common.sh@345 -- # : 1 00:10:02.235 06:43:55 nvme_rpc_timeouts -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:02.235 06:43:55 nvme_rpc_timeouts -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:02.235 06:43:55 nvme_rpc_timeouts -- scripts/common.sh@365 -- # decimal 1 00:10:02.235 06:43:55 nvme_rpc_timeouts -- scripts/common.sh@353 -- # local d=1 00:10:02.235 06:43:55 nvme_rpc_timeouts -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:02.235 06:43:55 nvme_rpc_timeouts -- scripts/common.sh@355 -- # echo 1 00:10:02.235 06:43:55 nvme_rpc_timeouts -- scripts/common.sh@365 -- # ver1[v]=1 00:10:02.235 06:43:55 nvme_rpc_timeouts -- scripts/common.sh@366 -- # decimal 2 00:10:02.235 06:43:55 nvme_rpc_timeouts -- scripts/common.sh@353 -- # local d=2 00:10:02.235 06:43:55 nvme_rpc_timeouts -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:02.235 06:43:55 nvme_rpc_timeouts -- scripts/common.sh@355 -- # echo 2 00:10:02.235 06:43:55 nvme_rpc_timeouts -- scripts/common.sh@366 -- # ver2[v]=2 00:10:02.235 06:43:55 nvme_rpc_timeouts -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:02.235 06:43:55 nvme_rpc_timeouts -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:02.235 06:43:55 nvme_rpc_timeouts -- scripts/common.sh@368 -- # return 0 00:10:02.235 06:43:55 nvme_rpc_timeouts -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:02.235 06:43:55 nvme_rpc_timeouts -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:02.235 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:02.235 --rc genhtml_branch_coverage=1 00:10:02.235 --rc genhtml_function_coverage=1 00:10:02.235 --rc genhtml_legend=1 00:10:02.235 --rc geninfo_all_blocks=1 00:10:02.235 --rc geninfo_unexecuted_blocks=1 00:10:02.235 00:10:02.235 ' 00:10:02.235 06:43:55 nvme_rpc_timeouts -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:02.235 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:02.235 --rc genhtml_branch_coverage=1 00:10:02.235 --rc genhtml_function_coverage=1 00:10:02.235 --rc genhtml_legend=1 00:10:02.235 --rc geninfo_all_blocks=1 00:10:02.235 --rc geninfo_unexecuted_blocks=1 00:10:02.235 00:10:02.235 ' 00:10:02.235 06:43:55 nvme_rpc_timeouts -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:02.235 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:02.235 --rc genhtml_branch_coverage=1 00:10:02.235 --rc genhtml_function_coverage=1 00:10:02.235 --rc genhtml_legend=1 00:10:02.235 --rc geninfo_all_blocks=1 00:10:02.235 --rc geninfo_unexecuted_blocks=1 00:10:02.235 00:10:02.235 ' 00:10:02.235 06:43:55 nvme_rpc_timeouts -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:02.235 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:02.235 --rc genhtml_branch_coverage=1 00:10:02.235 --rc genhtml_function_coverage=1 00:10:02.235 --rc genhtml_legend=1 00:10:02.235 --rc geninfo_all_blocks=1 00:10:02.235 --rc geninfo_unexecuted_blocks=1 00:10:02.235 00:10:02.235 ' 00:10:02.235 06:43:55 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@19 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:02.235 06:43:55 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@21 -- # tmpfile_default_settings=/tmp/settings_default_77591 00:10:02.235 06:43:55 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@22 -- # tmpfile_modified_settings=/tmp/settings_modified_77591 00:10:02.235 06:43:55 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@25 -- # spdk_tgt_pid=77623 00:10:02.235 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
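For reference, the nvme_rpc pass that completed above (pid 77537) drives just three rpc.py calls, the middle one being the intentional failure that produced the -32603 "open file failed" response:

    ./scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:10.0
    ./scripts/rpc.py bdev_nvme_apply_firmware non_existing_file Nvme0n1   # expected to fail
    ./scripts/rpc.py bdev_nvme_detach_controller Nvme0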
00:10:02.235 06:43:55 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@26 -- # trap 'kill -9 ${spdk_tgt_pid}; rm -f ${tmpfile_default_settings} ${tmpfile_modified_settings} ; exit 1' SIGINT SIGTERM EXIT 00:10:02.235 06:43:55 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@27 -- # waitforlisten 77623 00:10:02.235 06:43:55 nvme_rpc_timeouts -- common/autotest_common.sh@835 -- # '[' -z 77623 ']' 00:10:02.235 06:43:55 nvme_rpc_timeouts -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:02.235 06:43:55 nvme_rpc_timeouts -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:02.235 06:43:55 nvme_rpc_timeouts -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:02.235 06:43:55 nvme_rpc_timeouts -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:02.235 06:43:55 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:10:02.235 06:43:55 nvme_rpc_timeouts -- common/autotest_common.sh@10 -- # set +x 00:10:02.496 [2024-11-18 06:43:55.333097] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 00:10:02.496 [2024-11-18 06:43:55.333891] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77623 ] 00:10:02.496 [2024-11-18 06:43:55.494506] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:10:02.496 [2024-11-18 06:43:55.524664] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:02.496 [2024-11-18 06:43:55.524724] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:03.439 Checking default timeout settings: 00:10:03.439 06:43:56 nvme_rpc_timeouts -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:03.439 06:43:56 nvme_rpc_timeouts -- common/autotest_common.sh@868 -- # return 0 00:10:03.439 06:43:56 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@29 -- # echo Checking default timeout settings: 00:10:03.439 06:43:56 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:10:03.700 Making settings changes with rpc: 00:10:03.700 06:43:56 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@32 -- # echo Making settings changes with rpc: 00:10:03.700 06:43:56 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_set_options --timeout-us=12000000 --timeout-admin-us=24000000 --action-on-timeout=abort 00:10:03.700 Check default vs. modified settings: 00:10:03.700 06:43:56 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@36 -- # echo Check default vs. 
modified settings: 00:10:03.700 06:43:56 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:10:04.273 06:43:57 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@38 -- # settings_to_check='action_on_timeout timeout_us timeout_admin_us' 00:10:04.273 06:43:57 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:10:04.273 06:43:57 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep action_on_timeout /tmp/settings_default_77591 00:10:04.273 06:43:57 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:10:04.273 06:43:57 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:10:04.273 06:43:57 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=none 00:10:04.273 06:43:57 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep action_on_timeout /tmp/settings_modified_77591 00:10:04.273 06:43:57 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:10:04.273 06:43:57 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:10:04.273 Setting action_on_timeout is changed as expected. 00:10:04.273 06:43:57 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=abort 00:10:04.273 06:43:57 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' none == abort ']' 00:10:04.273 06:43:57 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting action_on_timeout is changed as expected. 00:10:04.273 06:43:57 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:10:04.273 06:43:57 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_us /tmp/settings_default_77591 00:10:04.273 06:43:57 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:10:04.273 06:43:57 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:10:04.273 06:43:57 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:10:04.273 06:43:57 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_us /tmp/settings_modified_77591 00:10:04.273 06:43:57 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:10:04.273 06:43:57 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:10:04.273 Setting timeout_us is changed as expected. 00:10:04.273 06:43:57 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=12000000 00:10:04.273 06:43:57 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 12000000 ']' 00:10:04.273 06:43:57 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_us is changed as expected. 
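Each "changed as expected" record comes from the same before/after comparison: snapshot the config, change the NVMe timeouts over RPC, snapshot again, and extract each setting with the grep/awk/sed chain seen in the trace. In outline (the timeout_admin_us pass below follows the same pattern):

    ./scripts/rpc.py save_config > /tmp/settings_default_$$
    ./scripts/rpc.py bdev_nvme_set_options \
        --timeout-us=12000000 --timeout-admin-us=24000000 --action-on-timeout=abort
    ./scripts/rpc.py save_config > /tmp/settings_modified_$$
    for setting in action_on_timeout timeout_us timeout_admin_us; do
        before=$(grep "$setting" /tmp/settings_default_$$ | awk '{print $2}' | sed 's/[^a-zA-Z0-9]//g')
        after=$(grep "$setting" /tmp/settings_modified_$$ | awk '{print $2}' | sed 's/[^a-zA-Z0-9]//g')
        [[ $before != "$after" ]] && echo "Setting $setting is changed as expected."
    done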
00:10:04.273 06:43:57 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:10:04.273 06:43:57 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_admin_us /tmp/settings_default_77591 00:10:04.273 06:43:57 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:10:04.273 06:43:57 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:10:04.273 06:43:57 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:10:04.273 06:43:57 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_admin_us /tmp/settings_modified_77591 00:10:04.273 06:43:57 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:10:04.273 06:43:57 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:10:04.273 Setting timeout_admin_us is changed as expected. 00:10:04.273 06:43:57 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=24000000 00:10:04.273 06:43:57 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 24000000 ']' 00:10:04.273 06:43:57 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_admin_us is changed as expected. 00:10:04.273 06:43:57 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@52 -- # trap - SIGINT SIGTERM EXIT 00:10:04.273 06:43:57 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@53 -- # rm -f /tmp/settings_default_77591 /tmp/settings_modified_77591 00:10:04.273 06:43:57 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@54 -- # killprocess 77623 00:10:04.273 06:43:57 nvme_rpc_timeouts -- common/autotest_common.sh@954 -- # '[' -z 77623 ']' 00:10:04.273 06:43:57 nvme_rpc_timeouts -- common/autotest_common.sh@958 -- # kill -0 77623 00:10:04.273 06:43:57 nvme_rpc_timeouts -- common/autotest_common.sh@959 -- # uname 00:10:04.273 06:43:57 nvme_rpc_timeouts -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:04.273 06:43:57 nvme_rpc_timeouts -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 77623 00:10:04.273 killing process with pid 77623 00:10:04.273 06:43:57 nvme_rpc_timeouts -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:04.273 06:43:57 nvme_rpc_timeouts -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:04.273 06:43:57 nvme_rpc_timeouts -- common/autotest_common.sh@972 -- # echo 'killing process with pid 77623' 00:10:04.273 06:43:57 nvme_rpc_timeouts -- common/autotest_common.sh@973 -- # kill 77623 00:10:04.273 06:43:57 nvme_rpc_timeouts -- common/autotest_common.sh@978 -- # wait 77623 00:10:04.534 RPC TIMEOUT SETTING TEST PASSED. 00:10:04.534 06:43:57 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@56 -- # echo RPC TIMEOUT SETTING TEST PASSED. 
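killprocess, traced just above for pid 77623, is a guarded kill-and-reap; a condensed form of the Linux branch it took here (where ps reported the process as reactor_0, not sudo):

    killprocess() {
        local pid=$1
        kill -0 "$pid" 2>/dev/null || return 0   # nothing to do if already gone
        # refuse to kill a bare sudo wrapper
        [[ $(ps --no-headers -o comm= "$pid") == sudo ]] && return 1
        echo "killing process with pid $pid"
        kill "$pid" && wait "$pid"
    }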
00:10:04.534 00:10:04.534 real 0m2.344s 00:10:04.534 user 0m4.673s 00:10:04.534 sys 0m0.553s 00:10:04.534 ************************************ 00:10:04.534 END TEST nvme_rpc_timeouts 00:10:04.534 ************************************ 00:10:04.534 06:43:57 nvme_rpc_timeouts -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:04.534 06:43:57 nvme_rpc_timeouts -- common/autotest_common.sh@10 -- # set +x 00:10:04.534 06:43:57 -- spdk/autotest.sh@239 -- # uname -s 00:10:04.534 06:43:57 -- spdk/autotest.sh@239 -- # '[' Linux = Linux ']' 00:10:04.534 06:43:57 -- spdk/autotest.sh@240 -- # run_test sw_hotplug /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh 00:10:04.534 06:43:57 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:04.534 06:43:57 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:04.534 06:43:57 -- common/autotest_common.sh@10 -- # set +x 00:10:04.534 ************************************ 00:10:04.534 START TEST sw_hotplug 00:10:04.534 ************************************ 00:10:04.534 06:43:57 sw_hotplug -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh 00:10:04.534 * Looking for test storage... 00:10:04.534 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:10:04.534 06:43:57 sw_hotplug -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:04.534 06:43:57 sw_hotplug -- common/autotest_common.sh@1693 -- # lcov --version 00:10:04.534 06:43:57 sw_hotplug -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:04.796 06:43:57 sw_hotplug -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:04.796 06:43:57 sw_hotplug -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:04.796 06:43:57 sw_hotplug -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:04.796 06:43:57 sw_hotplug -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:04.796 06:43:57 sw_hotplug -- scripts/common.sh@336 -- # IFS=.-: 00:10:04.796 06:43:57 sw_hotplug -- scripts/common.sh@336 -- # read -ra ver1 00:10:04.796 06:43:57 sw_hotplug -- scripts/common.sh@337 -- # IFS=.-: 00:10:04.796 06:43:57 sw_hotplug -- scripts/common.sh@337 -- # read -ra ver2 00:10:04.796 06:43:57 sw_hotplug -- scripts/common.sh@338 -- # local 'op=<' 00:10:04.796 06:43:57 sw_hotplug -- scripts/common.sh@340 -- # ver1_l=2 00:10:04.796 06:43:57 sw_hotplug -- scripts/common.sh@341 -- # ver2_l=1 00:10:04.796 06:43:57 sw_hotplug -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:04.796 06:43:57 sw_hotplug -- scripts/common.sh@344 -- # case "$op" in 00:10:04.796 06:43:57 sw_hotplug -- scripts/common.sh@345 -- # : 1 00:10:04.796 06:43:57 sw_hotplug -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:04.796 06:43:57 sw_hotplug -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:04.796 06:43:57 sw_hotplug -- scripts/common.sh@365 -- # decimal 1 00:10:04.796 06:43:57 sw_hotplug -- scripts/common.sh@353 -- # local d=1 00:10:04.796 06:43:57 sw_hotplug -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:04.796 06:43:57 sw_hotplug -- scripts/common.sh@355 -- # echo 1 00:10:04.796 06:43:57 sw_hotplug -- scripts/common.sh@365 -- # ver1[v]=1 00:10:04.796 06:43:57 sw_hotplug -- scripts/common.sh@366 -- # decimal 2 00:10:04.796 06:43:57 sw_hotplug -- scripts/common.sh@353 -- # local d=2 00:10:04.796 06:43:57 sw_hotplug -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:04.796 06:43:57 sw_hotplug -- scripts/common.sh@355 -- # echo 2 00:10:04.796 06:43:57 sw_hotplug -- scripts/common.sh@366 -- # ver2[v]=2 00:10:04.796 06:43:57 sw_hotplug -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:04.796 06:43:57 sw_hotplug -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:04.796 06:43:57 sw_hotplug -- scripts/common.sh@368 -- # return 0 00:10:04.796 06:43:57 sw_hotplug -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:04.796 06:43:57 sw_hotplug -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:04.796 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:04.796 --rc genhtml_branch_coverage=1 00:10:04.796 --rc genhtml_function_coverage=1 00:10:04.796 --rc genhtml_legend=1 00:10:04.796 --rc geninfo_all_blocks=1 00:10:04.796 --rc geninfo_unexecuted_blocks=1 00:10:04.796 00:10:04.796 ' 00:10:04.796 06:43:57 sw_hotplug -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:04.796 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:04.796 --rc genhtml_branch_coverage=1 00:10:04.796 --rc genhtml_function_coverage=1 00:10:04.796 --rc genhtml_legend=1 00:10:04.796 --rc geninfo_all_blocks=1 00:10:04.796 --rc geninfo_unexecuted_blocks=1 00:10:04.796 00:10:04.796 ' 00:10:04.796 06:43:57 sw_hotplug -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:04.796 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:04.796 --rc genhtml_branch_coverage=1 00:10:04.796 --rc genhtml_function_coverage=1 00:10:04.796 --rc genhtml_legend=1 00:10:04.796 --rc geninfo_all_blocks=1 00:10:04.796 --rc geninfo_unexecuted_blocks=1 00:10:04.796 00:10:04.796 ' 00:10:04.796 06:43:57 sw_hotplug -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:04.796 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:04.796 --rc genhtml_branch_coverage=1 00:10:04.796 --rc genhtml_function_coverage=1 00:10:04.796 --rc genhtml_legend=1 00:10:04.796 --rc geninfo_all_blocks=1 00:10:04.796 --rc geninfo_unexecuted_blocks=1 00:10:04.796 00:10:04.796 ' 00:10:04.796 06:43:57 sw_hotplug -- nvme/sw_hotplug.sh@129 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:10:05.057 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:10:05.057 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:10:05.057 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:10:05.057 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:10:05.057 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:10:05.057 06:43:58 sw_hotplug -- nvme/sw_hotplug.sh@131 -- # hotplug_wait=6 00:10:05.057 06:43:58 sw_hotplug -- nvme/sw_hotplug.sh@132 -- # hotplug_events=3 00:10:05.057 06:43:58 sw_hotplug -- nvme/sw_hotplug.sh@133 -- # nvmes=($(nvme_in_userspace)) 
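nvme_in_userspace, expanded in the trace below, enumerates PCI functions whose class/subclass/progif is 01/08/02 (mass storage / NVM / NVMe interface) and then filters each BDF through pci_can_use and the PCI_ALLOWED/PCI_BLOCKED lists. The discovery step is a single pipeline, built from the printf %02x pieces seen below:

    # lspci -mm quotes the class field, so cc carries the quotes too;
    # -p02 is the progif suffix lspci appends for NVMe (class 0x010802)
    lspci -mm -n -D | grep -i -- -p02 | \
        awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' | tr -d '"'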
00:10:05.057 06:43:58 sw_hotplug -- nvme/sw_hotplug.sh@133 -- # nvme_in_userspace 00:10:05.057 06:43:58 sw_hotplug -- scripts/common.sh@312 -- # local bdf bdfs 00:10:05.057 06:43:58 sw_hotplug -- scripts/common.sh@313 -- # local nvmes 00:10:05.057 06:43:58 sw_hotplug -- scripts/common.sh@315 -- # [[ -n '' ]] 00:10:05.057 06:43:58 sw_hotplug -- scripts/common.sh@318 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:10:05.057 06:43:58 sw_hotplug -- scripts/common.sh@318 -- # iter_pci_class_code 01 08 02 00:10:05.057 06:43:58 sw_hotplug -- scripts/common.sh@298 -- # local bdf= 00:10:05.057 06:43:58 sw_hotplug -- scripts/common.sh@300 -- # iter_all_pci_class_code 01 08 02 00:10:05.057 06:43:58 sw_hotplug -- scripts/common.sh@233 -- # local class 00:10:05.057 06:43:58 sw_hotplug -- scripts/common.sh@234 -- # local subclass 00:10:05.057 06:43:58 sw_hotplug -- scripts/common.sh@235 -- # local progif 00:10:05.057 06:43:58 sw_hotplug -- scripts/common.sh@236 -- # printf %02x 1 00:10:05.057 06:43:58 sw_hotplug -- scripts/common.sh@236 -- # class=01 00:10:05.057 06:43:58 sw_hotplug -- scripts/common.sh@237 -- # printf %02x 8 00:10:05.057 06:43:58 sw_hotplug -- scripts/common.sh@237 -- # subclass=08 00:10:05.057 06:43:58 sw_hotplug -- scripts/common.sh@238 -- # printf %02x 2 00:10:05.057 06:43:58 sw_hotplug -- scripts/common.sh@238 -- # progif=02 00:10:05.057 06:43:58 sw_hotplug -- scripts/common.sh@240 -- # hash lspci 00:10:05.057 06:43:58 sw_hotplug -- scripts/common.sh@241 -- # '[' 02 '!=' 00 ']' 00:10:05.057 06:43:58 sw_hotplug -- scripts/common.sh@244 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:10:05.057 06:43:58 sw_hotplug -- scripts/common.sh@243 -- # grep -i -- -p02 00:10:05.057 06:43:58 sw_hotplug -- scripts/common.sh@242 -- # lspci -mm -n -D 00:10:05.057 06:43:58 sw_hotplug -- scripts/common.sh@245 -- # tr -d '"' 00:10:05.318 06:43:58 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:10:05.318 06:43:58 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:10.0 00:10:05.318 06:43:58 sw_hotplug -- scripts/common.sh@18 -- # local i 00:10:05.318 06:43:58 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:10:05.318 06:43:58 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:10:05.318 06:43:58 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:10:05.318 06:43:58 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:10.0 00:10:05.318 06:43:58 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:10:05.318 06:43:58 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:11.0 00:10:05.318 06:43:58 sw_hotplug -- scripts/common.sh@18 -- # local i 00:10:05.318 06:43:58 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:10:05.318 06:43:58 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:10:05.318 06:43:58 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:10:05.318 06:43:58 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:11.0 00:10:05.318 06:43:58 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:10:05.318 06:43:58 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:12.0 00:10:05.318 06:43:58 sw_hotplug -- scripts/common.sh@18 -- # local i 00:10:05.318 06:43:58 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:12.0 ]] 00:10:05.318 06:43:58 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:10:05.318 06:43:58 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:10:05.318 06:43:58 sw_hotplug -- 
scripts/common.sh@302 -- # echo 0000:00:12.0 00:10:05.318 06:43:58 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:10:05.318 06:43:58 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:13.0 00:10:05.318 06:43:58 sw_hotplug -- scripts/common.sh@18 -- # local i 00:10:05.318 06:43:58 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:13.0 ]] 00:10:05.318 06:43:58 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:10:05.318 06:43:58 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:10:05.318 06:43:58 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:13.0 00:10:05.318 06:43:58 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:10:05.318 06:43:58 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:10:05.318 06:43:58 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:10:05.318 06:43:58 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:10:05.318 06:43:58 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:10:05.318 06:43:58 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:10:05.318 06:43:58 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:11.0 ]] 00:10:05.318 06:43:58 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:10:05.318 06:43:58 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:10:05.319 06:43:58 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:10:05.319 06:43:58 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:10:05.319 06:43:58 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:12.0 ]] 00:10:05.319 06:43:58 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:10:05.319 06:43:58 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:10:05.319 06:43:58 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:10:05.319 06:43:58 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:10:05.319 06:43:58 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:13.0 ]] 00:10:05.319 06:43:58 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:10:05.319 06:43:58 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:10:05.319 06:43:58 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:10:05.319 06:43:58 sw_hotplug -- scripts/common.sh@328 -- # (( 4 )) 00:10:05.319 06:43:58 sw_hotplug -- scripts/common.sh@329 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:10:05.319 06:43:58 sw_hotplug -- nvme/sw_hotplug.sh@134 -- # nvme_count=2 00:10:05.319 06:43:58 sw_hotplug -- nvme/sw_hotplug.sh@135 -- # nvmes=("${nvmes[@]::nvme_count}") 00:10:05.319 06:43:58 sw_hotplug -- nvme/sw_hotplug.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:10:05.579 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:10:05.579 Waiting for block devices as requested 00:10:05.840 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:10:05.840 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:10:05.840 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:10:06.101 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:10:11.430 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:10:11.430 06:44:04 sw_hotplug -- nvme/sw_hotplug.sh@140 -- # PCI_ALLOWED='0000:00:10.0 0000:00:11.0' 00:10:11.430 06:44:04 sw_hotplug -- nvme/sw_hotplug.sh@140 -- # 
/home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:10:11.430 0000:00:03.0 (1af4 1001): Skipping denied controller at 0000:00:03.0 00:10:11.691 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:10:11.691 0000:00:12.0 (1b36 0010): Skipping denied controller at 0000:00:12.0 00:10:11.952 0000:00:13.0 (1b36 0010): Skipping denied controller at 0000:00:13.0 00:10:12.213 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:10:12.213 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:10:12.213 06:44:05 sw_hotplug -- nvme/sw_hotplug.sh@143 -- # xtrace_disable 00:10:12.213 06:44:05 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:12.213 06:44:05 sw_hotplug -- nvme/sw_hotplug.sh@148 -- # run_hotplug 00:10:12.213 06:44:05 sw_hotplug -- nvme/sw_hotplug.sh@77 -- # trap 'killprocess $hotplug_pid; exit 1' SIGINT SIGTERM EXIT 00:10:12.213 06:44:05 sw_hotplug -- nvme/sw_hotplug.sh@85 -- # hotplug_pid=78474 00:10:12.213 06:44:05 sw_hotplug -- nvme/sw_hotplug.sh@87 -- # debug_remove_attach_helper 3 6 false 00:10:12.213 06:44:05 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:10:12.213 06:44:05 sw_hotplug -- nvme/sw_hotplug.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/examples/hotplug -i 0 -t 0 -n 6 -r 6 -l warning 00:10:12.213 06:44:05 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 false 00:10:12.213 06:44:05 sw_hotplug -- common/autotest_common.sh@709 -- # local cmd_es=0 00:10:12.213 06:44:05 sw_hotplug -- common/autotest_common.sh@711 -- # [[ -t 0 ]] 00:10:12.213 06:44:05 sw_hotplug -- common/autotest_common.sh@711 -- # exec 00:10:12.213 06:44:05 sw_hotplug -- common/autotest_common.sh@713 -- # local time=0 TIMEFORMAT=%2R 00:10:12.213 06:44:05 sw_hotplug -- common/autotest_common.sh@719 -- # remove_attach_helper 3 6 false 00:10:12.213 06:44:05 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:10:12.213 06:44:05 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:10:12.213 06:44:05 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=false 00:10:12.213 06:44:05 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:10:12.213 06:44:05 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6 00:10:12.474 Initializing NVMe Controllers 00:10:12.474 Attaching to 0000:00:10.0 00:10:12.474 Attaching to 0000:00:11.0 00:10:12.474 Attached to 0000:00:10.0 00:10:12.474 Attached to 0000:00:11.0 00:10:12.474 Initialization complete. Starting I/O... 
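Each hotplug event in this test is driven from sysfs rather than physical surprise removal. The script only logs the values it writes (the echo 1 / echo uio_pci_generic records below), so the target paths in this sketch are an assumption based on the standard Linux PCI hotplug knobs:

    # surprise-remove the device, rediscover it, then rebind to the userspace driver
    echo 1 > /sys/bus/pci/devices/0000:00:10.0/remove            # assumed path
    echo 1 > /sys/bus/pci/rescan                                 # assumed path
    echo uio_pci_generic > /sys/bus/pci/devices/0000:00:10.0/driver_override
    echo 0000:00:10.0 > /sys/bus/pci/drivers_probe

While the device is gone the SPDK hotplug example logs the qpair aborts and "in failed state" errors seen below, then picks the controller back up after the rebind.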
00:10:12.474 QEMU NVMe Ctrl (12340 ): 0 I/Os completed (+0) 00:10:12.474 QEMU NVMe Ctrl (12341 ): 0 I/Os completed (+0) 00:10:12.474 00:10:13.418 QEMU NVMe Ctrl (12340 ): 2512 I/Os completed (+2512) 00:10:13.418 QEMU NVMe Ctrl (12341 ): 2512 I/Os completed (+2512) 00:10:13.418 00:10:14.362 QEMU NVMe Ctrl (12340 ): 5676 I/Os completed (+3164) 00:10:14.362 QEMU NVMe Ctrl (12341 ): 5678 I/Os completed (+3166) 00:10:14.362 00:10:15.748 QEMU NVMe Ctrl (12340 ): 8788 I/Os completed (+3112) 00:10:15.749 QEMU NVMe Ctrl (12341 ): 8790 I/Os completed (+3112) 00:10:15.749 00:10:16.697 QEMU NVMe Ctrl (12340 ): 11892 I/Os completed (+3104) 00:10:16.697 QEMU NVMe Ctrl (12341 ): 11899 I/Os completed (+3109) 00:10:16.697 00:10:17.640 QEMU NVMe Ctrl (12340 ): 15064 I/Os completed (+3172) 00:10:17.640 QEMU NVMe Ctrl (12341 ): 15071 I/Os completed (+3172) 00:10:17.640 00:10:18.212 06:44:11 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:10:18.212 06:44:11 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:18.212 06:44:11 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:18.212 [2024-11-18 06:44:11.231364] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:10:18.212 Controller removed: QEMU NVMe Ctrl (12340 ) 00:10:18.212 [2024-11-18 06:44:11.232184] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:18.212 [2024-11-18 06:44:11.232218] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:18.212 [2024-11-18 06:44:11.232231] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:18.212 [2024-11-18 06:44:11.232244] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:18.212 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:10:18.212 [2024-11-18 06:44:11.233291] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:18.212 [2024-11-18 06:44:11.233321] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:18.212 [2024-11-18 06:44:11.233333] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:18.212 [2024-11-18 06:44:11.233345] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:18.212 06:44:11 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:18.212 06:44:11 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:18.212 [2024-11-18 06:44:11.253320] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:10:18.212 Controller removed: QEMU NVMe Ctrl (12341 ) 00:10:18.212 [2024-11-18 06:44:11.254070] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:18.212 [2024-11-18 06:44:11.254103] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:18.212 [2024-11-18 06:44:11.254118] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:18.212 [2024-11-18 06:44:11.254129] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:18.212 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:10:18.212 [2024-11-18 06:44:11.254937] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:18.212 [2024-11-18 06:44:11.254991] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:18.212 [2024-11-18 06:44:11.255004] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:18.212 [2024-11-18 06:44:11.255014] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:18.212 EAL: eal_parse_sysfs_value(): cannot open sysfs value /sys/bus/pci/devices/0000:00:11.0/vendor 00:10:18.212 EAL: Scan for (pci) bus failed. 00:10:18.212 06:44:11 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:10:18.212 06:44:11 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:10:18.472 06:44:11 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:18.472 06:44:11 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:18.472 06:44:11 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:10:18.472 06:44:11 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:10:18.472 06:44:11 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:18.472 06:44:11 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:18.472 06:44:11 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:18.472 06:44:11 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:10:18.472 Attaching to 0000:00:10.0 00:10:18.472 Attached to 0000:00:10.0 00:10:18.472 QEMU NVMe Ctrl (12340 ): 0 I/Os completed (+0) 00:10:18.472 00:10:18.472 06:44:11 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:10:18.472 06:44:11 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:18.473 06:44:11 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:10:18.473 Attaching to 0000:00:11.0 00:10:18.473 Attached to 0000:00:11.0 00:10:19.416 QEMU NVMe Ctrl (12340 ): 4681 I/Os completed (+4681) 00:10:19.416 QEMU NVMe Ctrl (12341 ): 4356 I/Os completed (+4356) 00:10:19.416 00:10:20.359 QEMU NVMe Ctrl (12340 ): 8964 I/Os completed (+4283) 00:10:20.360 QEMU NVMe Ctrl (12341 ): 8544 I/Os completed (+4188) 00:10:20.360 00:10:21.747 QEMU NVMe Ctrl (12340 ): 12367 I/Os completed (+3403) 00:10:21.747 QEMU NVMe Ctrl (12341 ): 11971 I/Os completed (+3427) 00:10:21.747 00:10:22.691 QEMU NVMe Ctrl (12340 ): 15927 I/Os completed (+3560) 00:10:22.691 QEMU NVMe Ctrl (12341 ): 15472 I/Os completed (+3501) 00:10:22.691 00:10:23.672 QEMU NVMe Ctrl (12340 ): 20623 I/Os completed (+4696) 00:10:23.672 QEMU NVMe Ctrl (12341 ): 20148 I/Os completed (+4676) 00:10:23.672 00:10:24.614 QEMU NVMe Ctrl (12340 ): 24969 I/Os completed (+4346) 00:10:24.614 QEMU NVMe Ctrl (12341 ): 24440 I/Os completed (+4292) 00:10:24.614 00:10:25.558 QEMU NVMe Ctrl (12340 ): 29281 I/Os completed (+4312) 
00:10:25.558 QEMU NVMe Ctrl (12341 ): 28684 I/Os completed (+4244) 00:10:25.558 00:10:26.498 QEMU NVMe Ctrl (12340 ): 33470 I/Os completed (+4189) 00:10:26.498 QEMU NVMe Ctrl (12341 ): 32829 I/Os completed (+4145) 00:10:26.498 00:10:27.441 QEMU NVMe Ctrl (12340 ): 37770 I/Os completed (+4300) 00:10:27.441 QEMU NVMe Ctrl (12341 ): 37074 I/Os completed (+4245) 00:10:27.441 00:10:28.383 QEMU NVMe Ctrl (12340 ): 41901 I/Os completed (+4131) 00:10:28.383 QEMU NVMe Ctrl (12341 ): 41159 I/Os completed (+4085) 00:10:28.383 00:10:29.768 QEMU NVMe Ctrl (12340 ): 45205 I/Os completed (+3304) 00:10:29.768 QEMU NVMe Ctrl (12341 ): 44463 I/Os completed (+3304) 00:10:29.768 00:10:30.712 QEMU NVMe Ctrl (12340 ): 48877 I/Os completed (+3672) 00:10:30.712 QEMU NVMe Ctrl (12341 ): 48114 I/Os completed (+3651) 00:10:30.712 00:10:30.712 06:44:23 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:10:30.712 06:44:23 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:10:30.712 06:44:23 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:30.712 06:44:23 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:30.712 [2024-11-18 06:44:23.501917] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:10:30.712 Controller removed: QEMU NVMe Ctrl (12340 ) 00:10:30.712 [2024-11-18 06:44:23.502707] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:30.712 [2024-11-18 06:44:23.502751] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:30.712 [2024-11-18 06:44:23.502765] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:30.712 [2024-11-18 06:44:23.502781] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:30.712 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:10:30.712 [2024-11-18 06:44:23.503758] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:30.712 [2024-11-18 06:44:23.503791] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:30.712 [2024-11-18 06:44:23.503801] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:30.712 [2024-11-18 06:44:23.503815] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:30.712 06:44:23 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:30.712 06:44:23 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:30.712 [2024-11-18 06:44:23.523281] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:10:30.712 Controller removed: QEMU NVMe Ctrl (12341 ) 00:10:30.712 [2024-11-18 06:44:23.524018] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:30.712 [2024-11-18 06:44:23.524049] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:30.712 [2024-11-18 06:44:23.524063] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:30.712 [2024-11-18 06:44:23.524075] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:30.712 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:10:30.712 [2024-11-18 06:44:23.524871] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:30.712 [2024-11-18 06:44:23.524898] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:30.712 [2024-11-18 06:44:23.524910] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:30.712 [2024-11-18 06:44:23.524920] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:30.712 06:44:23 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:10:30.712 06:44:23 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:10:30.712 EAL: eal_parse_sysfs_value(): cannot open sysfs value /sys/bus/pci/devices/0000:00:11.0/vendor 00:10:30.712 EAL: Scan for (pci) bus failed. 00:10:30.712 06:44:23 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:30.712 06:44:23 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:30.712 06:44:23 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:10:30.712 06:44:23 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:10:30.712 06:44:23 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:30.712 06:44:23 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:30.712 06:44:23 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:30.712 06:44:23 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:10:30.712 Attaching to 0000:00:10.0 00:10:30.712 Attached to 0000:00:10.0 00:10:30.712 06:44:23 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:10:30.712 06:44:23 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:30.712 06:44:23 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:10:30.712 Attaching to 0000:00:11.0 00:10:30.712 Attached to 0000:00:11.0 00:10:31.657 QEMU NVMe Ctrl (12340 ): 3142 I/Os completed (+3142) 00:10:31.657 QEMU NVMe Ctrl (12341 ): 2760 I/Os completed (+2760) 00:10:31.657 00:10:32.599 QEMU NVMe Ctrl (12340 ): 7338 I/Os completed (+4196) 00:10:32.599 QEMU NVMe Ctrl (12341 ): 6910 I/Os completed (+4150) 00:10:32.599 00:10:33.542 QEMU NVMe Ctrl (12340 ): 11794 I/Os completed (+4456) 00:10:33.542 QEMU NVMe Ctrl (12341 ): 11382 I/Os completed (+4472) 00:10:33.542 00:10:34.485 QEMU NVMe Ctrl (12340 ): 15315 I/Os completed (+3521) 00:10:34.485 QEMU NVMe Ctrl (12341 ): 14884 I/Os completed (+3502) 00:10:34.485 00:10:35.429 QEMU NVMe Ctrl (12340 ): 18262 I/Os completed (+2947) 00:10:35.429 QEMU NVMe Ctrl (12341 ): 17843 I/Os completed (+2959) 00:10:35.429 00:10:36.391 QEMU NVMe Ctrl (12340 ): 21290 I/Os completed (+3028) 00:10:36.391 QEMU NVMe Ctrl (12341 ): 20913 I/Os completed (+3070) 00:10:36.391 00:10:37.381 QEMU NVMe Ctrl (12340 ): 24527 I/Os completed (+3237) 00:10:37.381 QEMU NVMe Ctrl (12341 ): 24160 I/Os completed (+3247) 00:10:37.381 
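Note: the matching re-attach half is traced at sw_hotplug.sh@56 through @62 above: an `echo 1` (consistent with a bus rescan; compare the trap later in this log that spells out `echo 1 > /sys/bus/pci/rescan`), then per-device writes of `uio_pci_generic`, the BDF twice, and an empty string. The `EAL: eal_parse_sysfs_value(): cannot open sysfs value .../vendor` and `EAL: Scan for (pci) bus failed.` lines are the application rescanning while a device's sysfs entries are still absent, expected noise here; the "Attaching to / Attached to" lines confirm the controllers come back. The exact sysfs files are not visible in the trace; the following is one standard rescan-and-rebind sequence consistent with those echoes (the paths are an assumption, not quoted from the script):

    # Re-enumerate the bus and pin the returning device to uio_pci_generic.
    bdf=0000:00:10.0
    echo 1 > /sys/bus/pci/rescan
    echo uio_pci_generic > "/sys/bus/pci/devices/${bdf}/driver_override"
    echo "${bdf}" > /sys/bus/pci/drivers_probe                # bind per the override
    echo '' > "/sys/bus/pci/devices/${bdf}/driver_override"   # clear it again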
00:10:38.767 QEMU NVMe Ctrl (12340 ): 28777 I/Os completed (+4250) 00:10:38.767 QEMU NVMe Ctrl (12341 ): 28379 I/Os completed (+4219) 00:10:38.767 00:10:39.710 QEMU NVMe Ctrl (12340 ): 33456 I/Os completed (+4679) 00:10:39.710 QEMU NVMe Ctrl (12341 ): 33064 I/Os completed (+4685) 00:10:39.710 00:10:40.654 QEMU NVMe Ctrl (12340 ): 37701 I/Os completed (+4245) 00:10:40.654 QEMU NVMe Ctrl (12341 ): 37238 I/Os completed (+4174) 00:10:40.654 00:10:41.614 QEMU NVMe Ctrl (12340 ): 41474 I/Os completed (+3773) 00:10:41.614 QEMU NVMe Ctrl (12341 ): 40996 I/Os completed (+3758) 00:10:41.614 00:10:42.556 QEMU NVMe Ctrl (12340 ): 45284 I/Os completed (+3810) 00:10:42.556 QEMU NVMe Ctrl (12341 ): 44752 I/Os completed (+3756) 00:10:42.556 00:10:42.817 06:44:35 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:10:42.817 06:44:35 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:10:42.817 06:44:35 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:42.817 06:44:35 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:42.817 [2024-11-18 06:44:35.766169] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:10:42.817 Controller removed: QEMU NVMe Ctrl (12340 ) 00:10:42.817 [2024-11-18 06:44:35.767024] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:42.817 [2024-11-18 06:44:35.767060] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:42.817 [2024-11-18 06:44:35.767073] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:42.817 [2024-11-18 06:44:35.767088] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:42.817 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:10:42.817 [2024-11-18 06:44:35.768173] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:42.817 [2024-11-18 06:44:35.768206] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:42.817 [2024-11-18 06:44:35.768217] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:42.817 [2024-11-18 06:44:35.768227] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:42.817 06:44:35 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:42.817 06:44:35 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:42.817 [2024-11-18 06:44:35.791364] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:10:42.817 Controller removed: QEMU NVMe Ctrl (12341 ) 00:10:42.817 [2024-11-18 06:44:35.792104] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:42.817 [2024-11-18 06:44:35.792135] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:42.817 [2024-11-18 06:44:35.792150] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:42.817 [2024-11-18 06:44:35.792162] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:42.817 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:10:42.817 [2024-11-18 06:44:35.793008] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:42.817 [2024-11-18 06:44:35.793037] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:42.817 [2024-11-18 06:44:35.793049] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:42.817 [2024-11-18 06:44:35.793059] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:42.817 EAL: eal_parse_sysfs_value(): cannot open sysfs value /sys/bus/pci/devices/0000:00:11.0/vendor 00:10:42.817 EAL: Scan for (pci) bus failed. 00:10:42.817 06:44:35 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:10:42.817 06:44:35 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:10:42.817 06:44:35 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:42.817 06:44:35 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:42.817 06:44:35 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:10:43.079 06:44:35 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:10:43.079 06:44:35 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:43.079 06:44:35 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:43.079 06:44:35 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:43.079 06:44:35 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:10:43.079 Attaching to 0000:00:10.0 00:10:43.079 Attached to 0000:00:10.0 00:10:43.079 06:44:36 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:10:43.079 06:44:36 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:43.079 06:44:36 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:10:43.079 Attaching to 0000:00:11.0 00:10:43.079 Attached to 0000:00:11.0 00:10:43.079 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:10:43.079 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:10:43.079 [2024-11-18 06:44:36.040662] rpc.c: 409:spdk_rpc_close: *WARNING*: spdk_rpc_close: deprecated feature spdk_rpc_close is deprecated to be removed in v24.09 00:10:55.315 06:44:48 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:10:55.315 06:44:48 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:10:55.315 06:44:48 sw_hotplug -- common/autotest_common.sh@719 -- # time=42.81 00:10:55.315 06:44:48 sw_hotplug -- common/autotest_common.sh@720 -- # echo 42.81 00:10:55.315 06:44:48 sw_hotplug -- common/autotest_common.sh@722 -- # return 0 00:10:55.315 06:44:48 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=42.81 00:10:55.315 06:44:48 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 42.81 2 00:10:55.315 remove_attach_helper took 42.81s to complete (handling 2 nvme drive(s)) 06:44:48 sw_hotplug -- 
nvme/sw_hotplug.sh@91 -- # sleep 6 00:11:01.906 06:44:54 sw_hotplug -- nvme/sw_hotplug.sh@93 -- # kill -0 78474 00:11:01.906 /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh: line 93: kill: (78474) - No such process 00:11:01.906 06:44:54 sw_hotplug -- nvme/sw_hotplug.sh@95 -- # wait 78474 00:11:01.906 06:44:54 sw_hotplug -- nvme/sw_hotplug.sh@102 -- # trap - SIGINT SIGTERM EXIT 00:11:01.906 06:44:54 sw_hotplug -- nvme/sw_hotplug.sh@151 -- # tgt_run_hotplug 00:11:01.906 06:44:54 sw_hotplug -- nvme/sw_hotplug.sh@107 -- # local dev 00:11:01.906 06:44:54 sw_hotplug -- nvme/sw_hotplug.sh@110 -- # spdk_tgt_pid=79016 00:11:01.906 06:44:54 sw_hotplug -- nvme/sw_hotplug.sh@112 -- # trap 'killprocess ${spdk_tgt_pid}; echo 1 > /sys/bus/pci/rescan; exit 1' SIGINT SIGTERM EXIT 00:11:01.906 06:44:54 sw_hotplug -- nvme/sw_hotplug.sh@109 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:11:01.906 06:44:54 sw_hotplug -- nvme/sw_hotplug.sh@113 -- # waitforlisten 79016 00:11:01.906 06:44:54 sw_hotplug -- common/autotest_common.sh@835 -- # '[' -z 79016 ']' 00:11:01.906 06:44:54 sw_hotplug -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:01.906 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:01.906 06:44:54 sw_hotplug -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:01.907 06:44:54 sw_hotplug -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:01.907 06:44:54 sw_hotplug -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:01.907 06:44:54 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:01.907 [2024-11-18 06:44:54.135850] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 
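Note: two small harness idioms surface in the lines above. The `remove_attach_helper took 42.81s` report comes from timing the helper with bash's built-in `time` under `TIMEFORMAT=%2R` (elapsed wall seconds, two decimals), and `kill -0 78474` is the no-op signal used purely to test whether a PID still exists ("No such process" here means the previous workload already exited, so the following `wait` returns at once). A minimal sketch of both patterns, with a hypothetical helper name standing in for the framework's timing_cmd:

    # Capture only the elapsed time of a command, keeping its own output intact.
    exec 3>&1 4>&2                      # save the real stdout/stderr
    TIMEFORMAT=%2R                      # `time` prints wall seconds, 2 decimals
    elapsed=$({ time some_helper 3 6 true 1>&3 2>&4; } 2>&1)
    exec 3>&- 4>&-
    printf 'some_helper took %ss to complete\n' "$elapsed"

    kill -0 "$pid" 2>/dev/null || echo "process $pid has already exited"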
00:11:01.907 [2024-11-18 06:44:54.136036] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79016 ] 00:11:01.907 [2024-11-18 06:44:54.296319] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:01.907 [2024-11-18 06:44:54.325010] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:01.907 06:44:54 sw_hotplug -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:01.907 06:44:54 sw_hotplug -- common/autotest_common.sh@868 -- # return 0 00:11:01.907 06:44:54 sw_hotplug -- nvme/sw_hotplug.sh@115 -- # rpc_cmd bdev_nvme_set_hotplug -e 00:11:01.907 06:44:54 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:01.907 06:44:54 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:02.167 06:44:54 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:02.167 06:44:54 sw_hotplug -- nvme/sw_hotplug.sh@117 -- # debug_remove_attach_helper 3 6 true 00:11:02.167 06:44:54 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:11:02.167 06:44:54 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 true 00:11:02.167 06:44:54 sw_hotplug -- common/autotest_common.sh@709 -- # local cmd_es=0 00:11:02.167 06:44:54 sw_hotplug -- common/autotest_common.sh@711 -- # [[ -t 0 ]] 00:11:02.167 06:44:54 sw_hotplug -- common/autotest_common.sh@711 -- # exec 00:11:02.167 06:44:54 sw_hotplug -- common/autotest_common.sh@713 -- # local time=0 TIMEFORMAT=%2R 00:11:02.167 06:44:54 sw_hotplug -- common/autotest_common.sh@719 -- # remove_attach_helper 3 6 true 00:11:02.167 06:44:54 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:11:02.167 06:44:54 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:11:02.167 06:44:54 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=true 00:11:02.167 06:44:54 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:11:02.168 06:44:54 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6 00:11:08.870 06:45:01 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:08.870 06:45:01 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:08.870 06:45:01 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:08.870 06:45:01 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:08.870 06:45:01 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:08.870 06:45:01 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:11:08.870 06:45:01 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:08.870 06:45:01 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:08.870 06:45:01 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:08.870 06:45:01 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:08.870 06:45:01 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.870 06:45:01 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:08.870 06:45:01 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:08.870 06:45:01 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:08.870 06:45:01 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:11:08.870 06:45:01 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:11:08.870 [2024-11-18 06:45:01.089255] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: 
[0000:00:10.0, 0] in failed state. 00:11:08.870 [2024-11-18 06:45:01.090329] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:08.870 [2024-11-18 06:45:01.090369] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:08.870 [2024-11-18 06:45:01.090382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:08.870 [2024-11-18 06:45:01.090396] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:08.870 [2024-11-18 06:45:01.090407] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:08.870 [2024-11-18 06:45:01.090414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:08.870 [2024-11-18 06:45:01.090423] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:08.870 [2024-11-18 06:45:01.090429] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:08.870 [2024-11-18 06:45:01.090437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:08.870 [2024-11-18 06:45:01.090444] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:08.870 [2024-11-18 06:45:01.090451] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:08.870 [2024-11-18 06:45:01.090458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:08.870 [2024-11-18 06:45:01.489246] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
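Note: this second pass exercises hotplug through the SPDK target rather than against the PCIe driver directly. The trace above launches build/bin/spdk_tgt, waits for its RPC socket at /var/tmp/spdk.sock via the waitforlisten helper, then issues `rpc_cmd bdev_nvme_set_hotplug -e` so the bdev layer's hotplug monitor notices removals on its own. Outside the autotest harness the same setup looks roughly like the sketch below; the rpc.py invocations mirror the traced RPCs, while the readiness poll via rpc_get_methods is a stand-in for waitforlisten, which is part of the test framework:

    # Start the target and enable its NVMe hotplug monitor over JSON-RPC.
    ./build/bin/spdk_tgt &
    tgt_pid=$!
    until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5                      # RPC socket not up yet
    done
    ./scripts/rpc.py bdev_nvme_set_hotplug -e     # -d turns it back off later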
00:11:08.870 [2024-11-18 06:45:01.490307] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:08.870 [2024-11-18 06:45:01.490338] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:08.870 [2024-11-18 06:45:01.490348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:08.870 [2024-11-18 06:45:01.490367] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:08.870 [2024-11-18 06:45:01.490374] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:08.870 [2024-11-18 06:45:01.490384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:08.870 [2024-11-18 06:45:01.490390] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:08.870 [2024-11-18 06:45:01.490398] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:08.870 [2024-11-18 06:45:01.490404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:08.870 [2024-11-18 06:45:01.490413] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:08.870 [2024-11-18 06:45:01.490419] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:08.870 [2024-11-18 06:45:01.490427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:08.870 06:45:01 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:11:08.870 06:45:01 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:08.870 06:45:01 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:08.870 06:45:01 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:08.870 06:45:01 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:08.870 06:45:01 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:08.870 06:45:01 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.870 06:45:01 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:08.870 06:45:01 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:08.870 06:45:01 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:11:08.870 06:45:01 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:11:08.870 06:45:01 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:08.870 06:45:01 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:08.870 06:45:01 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:11:08.870 06:45:01 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:11:08.870 06:45:01 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:08.870 06:45:01 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:08.870 06:45:01 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:08.870 06:45:01 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:11.0 00:11:08.870 06:45:01 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:11:08.870 06:45:01 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:08.870 06:45:01 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:11:21.110 06:45:13 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:11:21.110 06:45:13 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:11:21.110 06:45:13 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:11:21.110 06:45:13 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:21.110 06:45:13 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:21.110 06:45:13 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:21.110 06:45:13 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:21.110 06:45:13 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:21.110 06:45:13 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:21.110 06:45:13 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:11:21.110 06:45:13 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:21.110 06:45:13 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:21.110 06:45:13 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:21.110 [2024-11-18 06:45:13.889357] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:11:21.110 [2024-11-18 06:45:13.890612] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:21.110 [2024-11-18 06:45:13.890643] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:21.110 [2024-11-18 06:45:13.890655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:21.110 [2024-11-18 06:45:13.890666] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:21.110 [2024-11-18 06:45:13.890674] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:21.110 [2024-11-18 06:45:13.890681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:21.110 [2024-11-18 06:45:13.890689] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:21.110 [2024-11-18 06:45:13.890695] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:21.110 [2024-11-18 06:45:13.890703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:21.110 [2024-11-18 06:45:13.890710] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:21.110 [2024-11-18 06:45:13.890717] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:21.110 [2024-11-18 06:45:13.890723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:21.110 06:45:13 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:21.110 06:45:13 
sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:21.110 06:45:13 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:11:21.110 06:45:13 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:21.110 06:45:13 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:21.110 06:45:13 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:21.110 06:45:13 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:21.110 06:45:13 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:21.110 06:45:13 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:21.110 06:45:13 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:21.110 06:45:13 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:21.110 06:45:13 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:11:21.110 06:45:13 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:11:21.371 [2024-11-18 06:45:14.389362] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 00:11:21.371 [2024-11-18 06:45:14.390384] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:21.371 [2024-11-18 06:45:14.390417] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:21.371 [2024-11-18 06:45:14.390427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:21.371 [2024-11-18 06:45:14.390438] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:21.371 [2024-11-18 06:45:14.390446] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:21.371 [2024-11-18 06:45:14.390454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:21.371 [2024-11-18 06:45:14.390460] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:21.371 [2024-11-18 06:45:14.390467] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:21.371 [2024-11-18 06:45:14.390474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:21.371 [2024-11-18 06:45:14.390482] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:21.371 [2024-11-18 06:45:14.390488] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:21.371 [2024-11-18 06:45:14.390496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:21.371 06:45:14 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:11:21.371 06:45:14 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:21.371 06:45:14 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:21.371 06:45:14 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:21.371 06:45:14 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:21.371 06:45:14 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd 
bdev_get_bdevs 00:11:21.371 06:45:14 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:21.371 06:45:14 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:21.371 06:45:14 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:21.632 06:45:14 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:11:21.632 06:45:14 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:11:21.632 06:45:14 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:21.632 06:45:14 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:21.632 06:45:14 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:11:21.632 06:45:14 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:11:21.632 06:45:14 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:21.632 06:45:14 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:21.632 06:45:14 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:21.632 06:45:14 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:11:21.632 06:45:14 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:11:21.893 06:45:14 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:21.893 06:45:14 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:11:34.120 06:45:26 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:11:34.120 06:45:26 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:11:34.120 06:45:26 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:11:34.120 06:45:26 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:34.120 06:45:26 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:34.120 06:45:26 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:34.120 06:45:26 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:34.120 06:45:26 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:34.120 06:45:26 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:34.120 06:45:26 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:11:34.120 06:45:26 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:34.120 06:45:26 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:34.120 06:45:26 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:34.120 06:45:26 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:34.120 06:45:26 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:34.120 [2024-11-18 06:45:26.789478] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 
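Note: in target mode the harness decides whether a removed device is really gone by asking the target, not sysfs. The repeated `bdfs=($(bdev_bdfs))` / `rpc_cmd bdev_get_bdevs` / `jq -r '.[].driver_specific.nvme[].pci_address'` / `sort -u` trace above lists the PCI addresses still backing NVMe bdevs, and the `(( N > 0 ))` / `sleep 0.5` / "Still waiting for %s to be gone" lines poll until the list drains. A reconstruction of that loop follows; the helper name bdev_bdfs and the jq filter are taken from the trace, but the function body here is inferred, not quoted:

    bdev_bdfs() {
        ./scripts/rpc.py bdev_get_bdevs \
            | jq -r '.[].driver_specific.nvme[].pci_address' \
            | sort -u
    }

    # Poll until the removed controller's BDF drops out of the bdev list.
    while bdev_bdfs | grep -qF 0000:00:11.0; do
        printf 'Still waiting for %s to be gone\n' 0000:00:11.0
        sleep 0.5
    done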
00:11:34.120 [2024-11-18 06:45:26.790576] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:34.120 [2024-11-18 06:45:26.790605] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:34.120 [2024-11-18 06:45:26.790618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:34.120 [2024-11-18 06:45:26.790629] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:34.120 [2024-11-18 06:45:26.790637] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:34.120 [2024-11-18 06:45:26.790644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:34.120 [2024-11-18 06:45:26.790652] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:34.120 [2024-11-18 06:45:26.790658] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:34.120 [2024-11-18 06:45:26.790666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:34.120 [2024-11-18 06:45:26.790672] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:34.120 [2024-11-18 06:45:26.790679] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:34.120 [2024-11-18 06:45:26.790686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:34.120 06:45:26 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:11:34.120 06:45:26 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:34.120 06:45:26 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:34.120 06:45:26 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:34.120 06:45:26 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:34.120 06:45:26 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:34.120 06:45:26 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:34.120 06:45:26 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:34.120 06:45:26 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:34.120 06:45:26 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:11:34.120 06:45:26 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:11:34.120 [2024-11-18 06:45:27.189479] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:11:34.120 [2024-11-18 06:45:27.190518] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:34.121 [2024-11-18 06:45:27.190551] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:34.121 [2024-11-18 06:45:27.190564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:34.121 [2024-11-18 06:45:27.190574] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:34.121 [2024-11-18 06:45:27.190582] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:34.121 [2024-11-18 06:45:27.190591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:34.121 [2024-11-18 06:45:27.190598] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:34.121 [2024-11-18 06:45:27.190605] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:34.121 [2024-11-18 06:45:27.190612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:34.121 [2024-11-18 06:45:27.190619] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:34.121 [2024-11-18 06:45:27.190625] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:34.121 [2024-11-18 06:45:27.190633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:34.382 06:45:27 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:11:34.382 06:45:27 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:34.382 06:45:27 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:34.382 06:45:27 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:34.382 06:45:27 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:34.382 06:45:27 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:34.382 06:45:27 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:34.382 06:45:27 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:34.382 06:45:27 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:34.382 06:45:27 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:11:34.382 06:45:27 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:11:34.382 06:45:27 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:34.382 06:45:27 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:34.382 06:45:27 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:11:34.643 06:45:27 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:11:34.643 06:45:27 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:34.643 06:45:27 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:34.643 06:45:27 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:34.643 06:45:27 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 
00:11:34.643 06:45:27 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:11:34.643 06:45:27 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:34.643 06:45:27 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:11:46.879 06:45:39 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:11:46.879 06:45:39 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:11:46.879 06:45:39 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:11:46.879 06:45:39 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:46.879 06:45:39 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:46.879 06:45:39 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:46.879 06:45:39 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.879 06:45:39 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:46.879 06:45:39 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.879 06:45:39 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:11:46.879 06:45:39 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:46.879 06:45:39 sw_hotplug -- common/autotest_common.sh@719 -- # time=44.64 00:11:46.879 06:45:39 sw_hotplug -- common/autotest_common.sh@720 -- # echo 44.64 00:11:46.879 06:45:39 sw_hotplug -- common/autotest_common.sh@722 -- # return 0 00:11:46.879 06:45:39 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=44.64 00:11:46.879 06:45:39 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 44.64 2 00:11:46.879 remove_attach_helper took 44.64s to complete (handling 2 nvme drive(s)) 06:45:39 sw_hotplug -- nvme/sw_hotplug.sh@119 -- # rpc_cmd bdev_nvme_set_hotplug -d 00:11:46.879 06:45:39 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.879 06:45:39 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:46.880 06:45:39 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.880 06:45:39 sw_hotplug -- nvme/sw_hotplug.sh@120 -- # rpc_cmd bdev_nvme_set_hotplug -e 00:11:46.880 06:45:39 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.880 06:45:39 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:46.880 06:45:39 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.880 06:45:39 sw_hotplug -- nvme/sw_hotplug.sh@122 -- # debug_remove_attach_helper 3 6 true 00:11:46.880 06:45:39 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:11:46.880 06:45:39 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 true 00:11:46.880 06:45:39 sw_hotplug -- common/autotest_common.sh@709 -- # local cmd_es=0 00:11:46.880 06:45:39 sw_hotplug -- common/autotest_common.sh@711 -- # [[ -t 0 ]] 00:11:46.880 06:45:39 sw_hotplug -- common/autotest_common.sh@711 -- # exec 00:11:46.880 06:45:39 sw_hotplug -- common/autotest_common.sh@713 -- # local time=0 TIMEFORMAT=%2R 00:11:46.880 06:45:39 sw_hotplug -- common/autotest_common.sh@719 -- # remove_attach_helper 3 6 true 00:11:46.880 06:45:39 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:11:46.880 06:45:39 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:11:46.880 06:45:39 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=true 00:11:46.880 06:45:39 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:11:46.880 06:45:39 sw_hotplug -- 
nvme/sw_hotplug.sh@36 -- # sleep 6 00:11:53.562 06:45:45 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:53.562 06:45:45 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:53.562 06:45:45 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:53.562 06:45:45 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:53.562 06:45:45 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:53.562 06:45:45 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:11:53.562 06:45:45 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:53.562 06:45:45 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:53.562 06:45:45 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:53.562 06:45:45 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:53.562 06:45:45 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:53.562 06:45:45 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:53.562 06:45:45 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:53.562 06:45:45 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:53.562 06:45:45 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:11:53.562 06:45:45 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:11:53.562 [2024-11-18 06:45:45.760382] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:11:53.562 [2024-11-18 06:45:45.761170] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:53.562 [2024-11-18 06:45:45.761195] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:53.562 [2024-11-18 06:45:45.761207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:53.562 [2024-11-18 06:45:45.761218] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:53.562 [2024-11-18 06:45:45.761226] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:53.562 [2024-11-18 06:45:45.761233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:53.562 [2024-11-18 06:45:45.761241] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:53.562 [2024-11-18 06:45:45.761248] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:53.562 [2024-11-18 06:45:45.761259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:53.562 [2024-11-18 06:45:45.761265] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:53.562 [2024-11-18 06:45:45.761272] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:53.563 [2024-11-18 06:45:45.761279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:53.563 [2024-11-18 06:45:46.160383] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
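Note, decoding the repeated qpair dumps above: `ASYNC EVENT REQUEST (0c)` is the admin-queue Asynchronous Event Request command, opcode 0x0c. SPDK keeps several posted per controller (four here, cid 187 through 190), and since an AER only completes when an event fires, these commands are always outstanding and must be aborted on every hot-remove. The completion's `(00/07)` is status code type 0x0 (generic) with status code 0x07, which SPDK prints as `ABORTED - BY REQUEST`; `qid:0` marks the admin queue, and `cdw0:0 sqhd:0000 p:0 m:0 dnr:0` shows a completion with the do-not-retry bit clear. In other words, these ERROR-level lines are the expected bookkeeping of a clean surprise removal, not test failures.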
00:11:53.563 [2024-11-18 06:45:46.161120] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:53.563 [2024-11-18 06:45:46.161149] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:53.563 [2024-11-18 06:45:46.161159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:53.563 [2024-11-18 06:45:46.161170] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:53.563 [2024-11-18 06:45:46.161177] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:53.563 [2024-11-18 06:45:46.161185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:53.563 [2024-11-18 06:45:46.161192] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:53.563 [2024-11-18 06:45:46.161199] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:53.563 [2024-11-18 06:45:46.161206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:53.563 [2024-11-18 06:45:46.161213] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:53.563 [2024-11-18 06:45:46.161220] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:53.563 [2024-11-18 06:45:46.161229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:53.563 06:45:46 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:11:53.563 06:45:46 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:53.563 06:45:46 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:53.563 06:45:46 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:53.563 06:45:46 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:53.563 06:45:46 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:53.563 06:45:46 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:53.563 06:45:46 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:53.563 06:45:46 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:53.563 06:45:46 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:11:53.563 06:45:46 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:11:53.563 06:45:46 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:53.563 06:45:46 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:53.563 06:45:46 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:11:53.563 06:45:46 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:11:53.563 06:45:46 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:53.563 06:45:46 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:53.563 06:45:46 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:53.563 06:45:46 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:11.0 00:11:53.563 06:45:46 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:11:53.563 06:45:46 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:53.563 06:45:46 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:12:05.800 06:45:58 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:12:05.800 06:45:58 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:12:05.800 06:45:58 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:12:05.800 06:45:58 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:05.800 06:45:58 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:05.800 06:45:58 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:05.800 06:45:58 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.800 06:45:58 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:05.800 06:45:58 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.800 06:45:58 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:12:05.800 06:45:58 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:12:05.800 06:45:58 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:12:05.800 06:45:58 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:12:05.800 [2024-11-18 06:45:58.560513] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:12:05.800 [2024-11-18 06:45:58.561401] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:05.800 [2024-11-18 06:45:58.561426] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:12:05.800 [2024-11-18 06:45:58.561437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:05.800 [2024-11-18 06:45:58.561450] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:05.800 [2024-11-18 06:45:58.561459] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:12:05.800 [2024-11-18 06:45:58.561465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:05.800 [2024-11-18 06:45:58.561474] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:05.800 [2024-11-18 06:45:58.561481] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:12:05.800 [2024-11-18 06:45:58.561488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:05.800 [2024-11-18 06:45:58.561494] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:05.800 [2024-11-18 06:45:58.561502] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:12:05.800 [2024-11-18 06:45:58.561508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:05.800 06:45:58 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:12:05.800 06:45:58 
sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:12:05.800 06:45:58 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:12:05.800 06:45:58 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:12:05.800 06:45:58 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:12:05.800 06:45:58 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:05.800 06:45:58 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:05.800 06:45:58 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:05.800 06:45:58 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.800 06:45:58 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:05.800 06:45:58 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.800 06:45:58 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:12:05.800 06:45:58 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:12:06.061 [2024-11-18 06:45:58.960513] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 00:12:06.061 [2024-11-18 06:45:58.961262] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:06.061 [2024-11-18 06:45:58.961295] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:12:06.061 [2024-11-18 06:45:58.961304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:06.061 [2024-11-18 06:45:58.961317] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:06.061 [2024-11-18 06:45:58.961325] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:12:06.061 [2024-11-18 06:45:58.961333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:06.061 [2024-11-18 06:45:58.961339] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:06.061 [2024-11-18 06:45:58.961347] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:12:06.061 [2024-11-18 06:45:58.961354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:06.061 [2024-11-18 06:45:58.961361] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:06.061 [2024-11-18 06:45:58.961367] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:12:06.061 [2024-11-18 06:45:58.961375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:06.061 06:45:59 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:12:06.062 06:45:59 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:12:06.062 06:45:59 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:12:06.062 06:45:59 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:06.062 06:45:59 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:06.062 06:45:59 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd 
bdev_get_bdevs 00:12:06.062 06:45:59 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:06.062 06:45:59 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:06.062 06:45:59 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:06.323 06:45:59 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:12:06.323 06:45:59 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:12:06.323 06:45:59 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:12:06.323 06:45:59 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:12:06.323 06:45:59 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:12:06.323 06:45:59 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:12:06.323 06:45:59 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:12:06.323 06:45:59 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:12:06.323 06:45:59 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:12:06.323 06:45:59 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:12:06.323 06:45:59 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:12:06.323 06:45:59 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:12:06.323 06:45:59 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:12:18.560 06:46:11 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:12:18.560 06:46:11 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:12:18.560 06:46:11 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:12:18.560 06:46:11 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:18.560 06:46:11 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:18.560 06:46:11 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.560 06:46:11 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:18.560 06:46:11 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:18.560 06:46:11 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.560 06:46:11 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:12:18.560 06:46:11 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:12:18.560 06:46:11 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:12:18.560 06:46:11 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:12:18.560 06:46:11 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:12:18.560 06:46:11 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:12:18.560 06:46:11 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:12:18.560 06:46:11 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:12:18.560 06:46:11 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:12:18.560 06:46:11 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:18.560 06:46:11 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:18.560 06:46:11 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:18.560 06:46:11 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.560 06:46:11 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:18.560 [2024-11-18 06:46:11.460655] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 
00:12:18.560 [2024-11-18 06:46:11.461430] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:18.560 [2024-11-18 06:46:11.461455] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:12:18.560 [2024-11-18 06:46:11.461468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:18.560 [2024-11-18 06:46:11.461480] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:18.560 [2024-11-18 06:46:11.461492] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:12:18.560 [2024-11-18 06:46:11.461499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:18.560 [2024-11-18 06:46:11.461507] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:18.560 [2024-11-18 06:46:11.461514] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:12:18.560 [2024-11-18 06:46:11.461522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:18.560 [2024-11-18 06:46:11.461528] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:18.560 [2024-11-18 06:46:11.461536] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:12:18.560 [2024-11-18 06:46:11.461542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:18.560 06:46:11 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.560 06:46:11 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:12:18.560 06:46:11 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:12:19.132 [2024-11-18 06:46:11.960658] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
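The failed-state and abort errors around this point are the expected fallout of surprise-removing a controller while admin commands are outstanding, not a test failure. Pieced together from the sw_hotplug.sh line numbers in this trace (@12-@13, @38-@40, @50-@51, @56-@66, @70-@71), the helper logic is roughly the sketch below; the sysfs redirect targets are assumptions, since the trace records only the bare echo values.

  bdev_bdfs() {                                        # sw_hotplug.sh@12-@13
      rpc_cmd bdev_get_bdevs \
          | jq -r '.[].driver_specific.nvme[].pci_address' \
          | sort -u
  }

  while ((hotplug_events--)); do                       # @38
      for dev in "${nvmes[@]}"; do                     # @39-@40
          echo 1 > "/sys/bus/pci/devices/$dev/remove"  # assumed target
      done
      while bdfs=($(bdev_bdfs)) && ((${#bdfs[@]} > 0)); do   # @50
          printf 'Still waiting for %s to be gone\n' "${bdfs[@]}"  # @51
          sleep 0.5
      done
      echo 1 > /sys/bus/pci/rescan                     # @56, assumed target
      for dev in "${nvmes[@]}"; do                     # @58-@62: rebind (condensed)
          echo uio_pci_generic > "/sys/bus/pci/devices/$dev/driver_override"
          echo "$dev" > /sys/bus/pci/drivers_probe     # assumed target
          echo '' > "/sys/bus/pci/devices/$dev/driver_override"
      done
      sleep 12                                         # @66: let bdevs re-attach
      bdfs=($(bdev_bdfs))                              # @70
      [[ ${bdfs[*]} == "${nvmes[*]}" ]]                # @71: all devices back?
  done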
00:12:19.132 [2024-11-18 06:46:11.961398] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:19.132 [2024-11-18 06:46:11.961428] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:12:19.132 [2024-11-18 06:46:11.961438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:19.132 [2024-11-18 06:46:11.961448] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:19.132 [2024-11-18 06:46:11.961456] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:12:19.132 [2024-11-18 06:46:11.961464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:19.132 [2024-11-18 06:46:11.961471] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:19.132 [2024-11-18 06:46:11.961480] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:12:19.132 [2024-11-18 06:46:11.961487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:19.132 [2024-11-18 06:46:11.961495] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:19.132 [2024-11-18 06:46:11.961501] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:12:19.132 [2024-11-18 06:46:11.961508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:19.132 06:46:11 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:12:19.132 06:46:11 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:12:19.132 06:46:11 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:12:19.132 06:46:11 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:19.132 06:46:11 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:19.132 06:46:11 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:19.132 06:46:11 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:19.132 06:46:11 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:19.132 06:46:11 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:19.132 06:46:12 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:12:19.132 06:46:12 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:12:19.132 06:46:12 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:12:19.132 06:46:12 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:12:19.132 06:46:12 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:12:19.132 06:46:12 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:12:19.132 06:46:12 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:12:19.132 06:46:12 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:12:19.132 06:46:12 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:12:19.132 06:46:12 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:11.0 00:12:19.394 06:46:12 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:12:19.394 06:46:12 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:12:19.394 06:46:12 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:12:31.626 06:46:24 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:12:31.626 06:46:24 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:12:31.626 06:46:24 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:12:31.626 06:46:24 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:31.626 06:46:24 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:31.626 06:46:24 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:31.626 06:46:24 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:31.626 06:46:24 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:31.626 06:46:24 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:31.626 06:46:24 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:12:31.626 06:46:24 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:12:31.626 06:46:24 sw_hotplug -- common/autotest_common.sh@719 -- # time=44.62 00:12:31.626 06:46:24 sw_hotplug -- common/autotest_common.sh@720 -- # echo 44.62 00:12:31.626 06:46:24 sw_hotplug -- common/autotest_common.sh@722 -- # return 0 00:12:31.626 06:46:24 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=44.62 00:12:31.626 06:46:24 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 44.62 2 00:12:31.626 remove_attach_helper took 44.62s to complete (handling 2 nvme drive(s)) 06:46:24 sw_hotplug -- nvme/sw_hotplug.sh@124 -- # trap - SIGINT SIGTERM EXIT 00:12:31.626 06:46:24 sw_hotplug -- nvme/sw_hotplug.sh@125 -- # killprocess 79016 00:12:31.626 06:46:24 sw_hotplug -- common/autotest_common.sh@954 -- # '[' -z 79016 ']' 00:12:31.626 06:46:24 sw_hotplug -- common/autotest_common.sh@958 -- # kill -0 79016 00:12:31.626 06:46:24 sw_hotplug -- common/autotest_common.sh@959 -- # uname 00:12:31.626 06:46:24 sw_hotplug -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:31.626 06:46:24 sw_hotplug -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 79016 00:12:31.626 06:46:24 sw_hotplug -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:31.626 06:46:24 sw_hotplug -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:31.626 killing process with pid 79016 00:12:31.626 06:46:24 sw_hotplug -- common/autotest_common.sh@972 -- # echo 'killing process with pid 79016' 00:12:31.626 06:46:24 sw_hotplug -- common/autotest_common.sh@973 -- # kill 79016 00:12:31.626 06:46:24 sw_hotplug -- common/autotest_common.sh@978 -- # wait 79016 00:12:31.626 06:46:24 sw_hotplug -- nvme/sw_hotplug.sh@154 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:12:31.888 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:12:32.459 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:12:32.460 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:12:32.460 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:12:32.460 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:12:32.460 00:12:32.460 real 2m27.962s 00:12:32.460 user 1m48.064s 00:12:32.460 sys 0m18.264s 00:12:32.460 06:46:25 sw_hotplug -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:12:32.460 06:46:25 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:32.460 ************************************ 00:12:32.460 END TEST sw_hotplug 00:12:32.460 ************************************ 00:12:32.460 06:46:25 -- spdk/autotest.sh@243 -- # [[ 1 -eq 1 ]] 00:12:32.460 06:46:25 -- spdk/autotest.sh@244 -- # run_test nvme_xnvme /home/vagrant/spdk_repo/spdk/test/nvme/xnvme/xnvme.sh 00:12:32.460 06:46:25 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:12:32.460 06:46:25 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:32.460 06:46:25 -- common/autotest_common.sh@10 -- # set +x 00:12:32.460 ************************************ 00:12:32.460 START TEST nvme_xnvme 00:12:32.460 ************************************ 00:12:32.460 06:46:25 nvme_xnvme -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/xnvme/xnvme.sh 00:12:32.722 * Looking for test storage... 00:12:32.722 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:12:32.722 06:46:25 nvme_xnvme -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:12:32.722 06:46:25 nvme_xnvme -- common/autotest_common.sh@1693 -- # lcov --version 00:12:32.722 06:46:25 nvme_xnvme -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:12:32.722 06:46:25 nvme_xnvme -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:12:32.722 06:46:25 nvme_xnvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:32.722 06:46:25 nvme_xnvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:32.722 06:46:25 nvme_xnvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:32.722 06:46:25 nvme_xnvme -- scripts/common.sh@336 -- # IFS=.-: 00:12:32.722 06:46:25 nvme_xnvme -- scripts/common.sh@336 -- # read -ra ver1 00:12:32.722 06:46:25 nvme_xnvme -- scripts/common.sh@337 -- # IFS=.-: 00:12:32.722 06:46:25 nvme_xnvme -- scripts/common.sh@337 -- # read -ra ver2 00:12:32.722 06:46:25 nvme_xnvme -- scripts/common.sh@338 -- # local 'op=<' 00:12:32.722 06:46:25 nvme_xnvme -- scripts/common.sh@340 -- # ver1_l=2 00:12:32.722 06:46:25 nvme_xnvme -- scripts/common.sh@341 -- # ver2_l=1 00:12:32.722 06:46:25 nvme_xnvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:32.722 06:46:25 nvme_xnvme -- scripts/common.sh@344 -- # case "$op" in 00:12:32.722 06:46:25 nvme_xnvme -- scripts/common.sh@345 -- # : 1 00:12:32.722 06:46:25 nvme_xnvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:32.722 06:46:25 nvme_xnvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:32.722 06:46:25 nvme_xnvme -- scripts/common.sh@365 -- # decimal 1 00:12:32.722 06:46:25 nvme_xnvme -- scripts/common.sh@353 -- # local d=1 00:12:32.722 06:46:25 nvme_xnvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:32.722 06:46:25 nvme_xnvme -- scripts/common.sh@355 -- # echo 1 00:12:32.722 06:46:25 nvme_xnvme -- scripts/common.sh@365 -- # ver1[v]=1 00:12:32.722 06:46:25 nvme_xnvme -- scripts/common.sh@366 -- # decimal 2 00:12:32.722 06:46:25 nvme_xnvme -- scripts/common.sh@353 -- # local d=2 00:12:32.722 06:46:25 nvme_xnvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:32.722 06:46:25 nvme_xnvme -- scripts/common.sh@355 -- # echo 2 00:12:32.722 06:46:25 nvme_xnvme -- scripts/common.sh@366 -- # ver2[v]=2 00:12:32.722 06:46:25 nvme_xnvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:32.722 06:46:25 nvme_xnvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:32.722 06:46:25 nvme_xnvme -- scripts/common.sh@368 -- # return 0 00:12:32.722 06:46:25 nvme_xnvme -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:32.722 06:46:25 nvme_xnvme -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:12:32.722 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:32.722 --rc genhtml_branch_coverage=1 00:12:32.722 --rc genhtml_function_coverage=1 00:12:32.722 --rc genhtml_legend=1 00:12:32.722 --rc geninfo_all_blocks=1 00:12:32.722 --rc geninfo_unexecuted_blocks=1 00:12:32.722 00:12:32.722 ' 00:12:32.722 06:46:25 nvme_xnvme -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:12:32.722 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:32.722 --rc genhtml_branch_coverage=1 00:12:32.722 --rc genhtml_function_coverage=1 00:12:32.722 --rc genhtml_legend=1 00:12:32.722 --rc geninfo_all_blocks=1 00:12:32.722 --rc geninfo_unexecuted_blocks=1 00:12:32.722 00:12:32.722 ' 00:12:32.722 06:46:25 nvme_xnvme -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:12:32.722 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:32.722 --rc genhtml_branch_coverage=1 00:12:32.722 --rc genhtml_function_coverage=1 00:12:32.722 --rc genhtml_legend=1 00:12:32.722 --rc geninfo_all_blocks=1 00:12:32.722 --rc geninfo_unexecuted_blocks=1 00:12:32.722 00:12:32.722 ' 00:12:32.722 06:46:25 nvme_xnvme -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:12:32.722 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:32.722 --rc genhtml_branch_coverage=1 00:12:32.722 --rc genhtml_function_coverage=1 00:12:32.722 --rc genhtml_legend=1 00:12:32.722 --rc geninfo_all_blocks=1 00:12:32.722 --rc geninfo_unexecuted_blocks=1 00:12:32.722 00:12:32.722 ' 00:12:32.722 06:46:25 nvme_xnvme -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:32.722 06:46:25 nvme_xnvme -- scripts/common.sh@15 -- # shopt -s extglob 00:12:32.722 06:46:25 nvme_xnvme -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:32.722 06:46:25 nvme_xnvme -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:32.722 06:46:25 nvme_xnvme -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:32.722 06:46:25 nvme_xnvme -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:32.722 06:46:25 nvme_xnvme -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:32.722 06:46:25 nvme_xnvme -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:32.722 06:46:25 nvme_xnvme -- paths/export.sh@5 -- # export PATH 00:12:32.722 06:46:25 nvme_xnvme -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:32.722 06:46:25 nvme_xnvme -- xnvme/xnvme.sh@85 -- # run_test xnvme_to_malloc_dd_copy malloc_to_xnvme_copy 00:12:32.722 06:46:25 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:12:32.722 06:46:25 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:32.722 06:46:25 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:32.722 ************************************ 00:12:32.722 START TEST xnvme_to_malloc_dd_copy 00:12:32.722 ************************************ 00:12:32.722 06:46:25 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@1129 -- # malloc_to_xnvme_copy 00:12:32.722 06:46:25 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@14 -- # init_null_blk gb=1 00:12:32.722 06:46:25 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@186 -- # [[ -e /sys/module/null_blk ]] 00:12:32.722 06:46:25 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@186 -- # modprobe null_blk gb=1 00:12:32.722 06:46:25 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@187 -- # return 00:12:32.722 06:46:25 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@16 -- # local mbdev0=malloc0 mbdev0_bs=512 00:12:32.722 06:46:25 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@17 -- # xnvme_io=() 00:12:32.722 06:46:25 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@17 -- # local xnvme0=null0 xnvme0_dev xnvme_io 00:12:32.722 06:46:25 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@18 -- # local io 00:12:32.723 06:46:25 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@20 -- # xnvme_io+=(libaio) 00:12:32.723 06:46:25 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@21 -- # xnvme_io+=(io_uring) 00:12:32.723 06:46:25 
nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@25 -- # mbdev0_b=2097152 00:12:32.723 06:46:25 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@26 -- # xnvme0_dev=/dev/nullb0 00:12:32.723 06:46:25 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@28 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='2097152' ['block_size']='512') 00:12:32.723 06:46:25 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@28 -- # local -A method_bdev_malloc_create_0 00:12:32.723 06:46:25 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@34 -- # method_bdev_xnvme_create_0=() 00:12:32.723 06:46:25 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@34 -- # local -A method_bdev_xnvme_create_0 00:12:32.723 06:46:25 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@35 -- # method_bdev_xnvme_create_0["name"]=null0 00:12:32.723 06:46:25 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@36 -- # method_bdev_xnvme_create_0["filename"]=/dev/nullb0 00:12:32.723 06:46:25 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@38 -- # for io in "${xnvme_io[@]}" 00:12:32.723 06:46:25 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@39 -- # method_bdev_xnvme_create_0["io_mechanism"]=libaio 00:12:32.723 06:46:25 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@42 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=null0 --json /dev/fd/62 00:12:32.723 06:46:25 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@42 -- # gen_conf 00:12:32.723 06:46:25 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@31 -- # xtrace_disable 00:12:32.723 06:46:25 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:12:32.723 { 00:12:32.723 "subsystems": [ 00:12:32.723 { 00:12:32.723 "subsystem": "bdev", 00:12:32.723 "config": [ 00:12:32.723 { 00:12:32.723 "params": { 00:12:32.723 "block_size": 512, 00:12:32.723 "num_blocks": 2097152, 00:12:32.723 "name": "malloc0" 00:12:32.723 }, 00:12:32.723 "method": "bdev_malloc_create" 00:12:32.723 }, 00:12:32.723 { 00:12:32.723 "params": { 00:12:32.723 "io_mechanism": "libaio", 00:12:32.723 "filename": "/dev/nullb0", 00:12:32.723 "name": "null0" 00:12:32.723 }, 00:12:32.723 "method": "bdev_xnvme_create" 00:12:32.723 }, 00:12:32.723 { 00:12:32.723 "method": "bdev_wait_for_examine" 00:12:32.723 } 00:12:32.723 ] 00:12:32.723 } 00:12:32.723 ] 00:12:32.723 } 00:12:32.984 [2024-11-18 06:46:25.805502] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 
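The gen_conf JSON above reaches spdk_dd on /dev/fd/62 via process substitution, and the run can be replayed by hand. A minimal standalone equivalent of this first libaio pass — same config, copied verbatim from the trace but written to a regular file for readability — would be:

  # Assumes null_blk is loaded as above (modprobe null_blk gb=1) and the
  # SPDK build tree from this run.
  cat > /tmp/xnvme_copy.json <<'EOF'
  {
    "subsystems": [
      {
        "subsystem": "bdev",
        "config": [
          {
            "params": { "block_size": 512, "num_blocks": 2097152, "name": "malloc0" },
            "method": "bdev_malloc_create"
          },
          {
            "params": { "io_mechanism": "libaio", "filename": "/dev/nullb0", "name": "null0" },
            "method": "bdev_xnvme_create"
          },
          { "method": "bdev_wait_for_examine" }
        ]
      }
    ]
  }
  EOF
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=null0 \
      --json /tmp/xnvme_copy.json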
00:12:32.984 [2024-11-18 06:46:25.805644] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80367 ] 00:12:32.984 [2024-11-18 06:46:25.968708] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:32.984 [2024-11-18 06:46:26.000107] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:34.371  [2024-11-18T06:46:28.400Z] Copying: 223/1024 [MB] (223 MBps) [2024-11-18T06:46:29.783Z] Copying: 448/1024 [MB] (224 MBps) [2024-11-18T06:46:30.356Z] Copying: 700/1024 [MB] (252 MBps) [2024-11-18T06:46:30.618Z] Copying: 1007/1024 [MB] (306 MBps) [2024-11-18T06:46:30.878Z] Copying: 1024/1024 [MB] (average 252 MBps) 00:12:37.791 00:12:37.791 06:46:30 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=null0 --ob=malloc0 --json /dev/fd/62 00:12:37.791 06:46:30 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@47 -- # gen_conf 00:12:37.791 06:46:30 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@31 -- # xtrace_disable 00:12:37.791 06:46:30 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:12:37.791 { 00:12:37.791 "subsystems": [ 00:12:37.791 { 00:12:37.791 "subsystem": "bdev", 00:12:37.791 "config": [ 00:12:37.791 { 00:12:37.791 "params": { 00:12:37.791 "block_size": 512, 00:12:37.791 "num_blocks": 2097152, 00:12:37.791 "name": "malloc0" 00:12:37.791 }, 00:12:37.791 "method": "bdev_malloc_create" 00:12:37.791 }, 00:12:37.791 { 00:12:37.791 "params": { 00:12:37.791 "io_mechanism": "libaio", 00:12:37.791 "filename": "/dev/nullb0", 00:12:37.791 "name": "null0" 00:12:37.791 }, 00:12:37.791 "method": "bdev_xnvme_create" 00:12:37.791 }, 00:12:37.791 { 00:12:37.791 "method": "bdev_wait_for_examine" 00:12:37.791 } 00:12:37.791 ] 00:12:37.791 } 00:12:37.791 ] 00:12:37.791 } 00:12:37.791 [2024-11-18 06:46:30.784567] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 
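That JSON is not written by hand. As the xnvme.sh trace shows (@28, @34-@36), each bdev is declared as a bash associative array named method_<rpc>_<index>, and gen_conf (dd/common.sh) serializes every such array into one "config" entry. The serializer body below is a hypothetical sketch of that convention, not SPDK's actual implementation:

  # Declarations as traced (xnvme.sh@28 and @34-@36):
  declare -A method_bdev_malloc_create_0=(
      ['name']='malloc0' ['num_blocks']='2097152' ['block_size']='512'
  )
  declare -A method_bdev_xnvme_create_0=(
      ['name']='null0' ['filename']='/dev/nullb0' ['io_mechanism']='libaio'
  )

  # Hypothetical stand-in for gen_conf: the RPC method name is recovered
  # from the variable name, the params from the array's key/value pairs.
  gen_conf_sketch() {
      local var key entries=()
      for var in ${!method_*}; do
          local -n params=$var
          local method=${var#method_} kv=()
          method=${method%_*}
          for key in "${!params[@]}"; do
              if [[ ${params[$key]} =~ ^[0-9]+$ ]]; then
                  kv+=("\"$key\": ${params[$key]}")       # bare number
              else
                  kv+=("\"$key\": \"${params[$key]}\"")   # quoted string
              fi
          done
          entries+=("{\"params\": {$(IFS=,; echo "${kv[*]}")}, \"method\": \"$method\"}")
      done
      entries+=('{"method": "bdev_wait_for_examine"}')
      printf '{"subsystems": [{"subsystem": "bdev", "config": [%s]}]}\n' \
          "$(IFS=,; echo "${entries[*]}")"
  }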
00:12:37.792 [2024-11-18 06:46:30.784681] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80436 ] 00:12:38.053 [2024-11-18 06:46:30.936372] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:38.053 [2024-11-18 06:46:30.955291] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:39.440  [2024-11-18T06:46:33.470Z] Copying: 308/1024 [MB] (308 MBps) [2024-11-18T06:46:34.414Z] Copying: 618/1024 [MB] (309 MBps) [2024-11-18T06:46:34.674Z] Copying: 929/1024 [MB] (311 MBps) [2024-11-18T06:46:34.935Z] Copying: 1024/1024 [MB] (average 310 MBps) 00:12:41.848 00:12:41.848 06:46:34 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@38 -- # for io in "${xnvme_io[@]}" 00:12:41.848 06:46:34 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@39 -- # method_bdev_xnvme_create_0["io_mechanism"]=io_uring 00:12:41.848 06:46:34 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@42 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=null0 --json /dev/fd/62 00:12:41.848 06:46:34 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@42 -- # gen_conf 00:12:41.848 06:46:34 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@31 -- # xtrace_disable 00:12:41.848 06:46:34 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:12:41.848 { 00:12:41.848 "subsystems": [ 00:12:41.848 { 00:12:41.848 "subsystem": "bdev", 00:12:41.848 "config": [ 00:12:41.848 { 00:12:41.848 "params": { 00:12:41.848 "block_size": 512, 00:12:41.848 "num_blocks": 2097152, 00:12:41.848 "name": "malloc0" 00:12:41.848 }, 00:12:41.848 "method": "bdev_malloc_create" 00:12:41.848 }, 00:12:41.848 { 00:12:41.848 "params": { 00:12:41.848 "io_mechanism": "io_uring", 00:12:41.848 "filename": "/dev/nullb0", 00:12:41.848 "name": "null0" 00:12:41.848 }, 00:12:41.848 "method": "bdev_xnvme_create" 00:12:41.848 }, 00:12:41.848 { 00:12:41.848 "method": "bdev_wait_for_examine" 00:12:41.848 } 00:12:41.848 ] 00:12:41.848 } 00:12:41.848 ] 00:12:41.848 } 00:12:41.848 [2024-11-18 06:46:34.861241] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 
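This third pass only flips io_mechanism from libaio to io_uring in the same config (xnvme.sh@38-@39 above); nothing else changes. Outside the harness, the equivalent bdev can be created against a running target with the same RPC this log feeds to rpc_cmd later in the blockdev suite, e.g.:

  # Same null0 bdev, io_uring backend, via RPC; the positional order
  # (filename, bdev name, io mechanism) matches the bdev_xnvme_create
  # lines recorded further down in this log.
  scripts/rpc.py bdev_xnvme_create /dev/nullb0 null0 io_uring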
00:12:41.848 [2024-11-18 06:46:34.861353] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80485 ] 00:12:42.109 [2024-11-18 06:46:35.015499] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:42.109 [2024-11-18 06:46:35.032726] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:43.494  [2024-11-18T06:46:37.524Z] Copying: 317/1024 [MB] (317 MBps) [2024-11-18T06:46:38.466Z] Copying: 635/1024 [MB] (318 MBps) [2024-11-18T06:46:38.728Z] Copying: 954/1024 [MB] (319 MBps) [2024-11-18T06:46:38.989Z] Copying: 1024/1024 [MB] (average 318 MBps) 00:12:45.902 00:12:45.902 06:46:38 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=null0 --ob=malloc0 --json /dev/fd/62 00:12:45.902 06:46:38 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@47 -- # gen_conf 00:12:45.902 06:46:38 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@31 -- # xtrace_disable 00:12:45.902 06:46:38 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:12:45.902 { 00:12:45.902 "subsystems": [ 00:12:45.902 { 00:12:45.902 "subsystem": "bdev", 00:12:45.902 "config": [ 00:12:45.902 { 00:12:45.902 "params": { 00:12:45.902 "block_size": 512, 00:12:45.902 "num_blocks": 2097152, 00:12:45.902 "name": "malloc0" 00:12:45.902 }, 00:12:45.902 "method": "bdev_malloc_create" 00:12:45.902 }, 00:12:45.902 { 00:12:45.902 "params": { 00:12:45.902 "io_mechanism": "io_uring", 00:12:45.902 "filename": "/dev/nullb0", 00:12:45.902 "name": "null0" 00:12:45.902 }, 00:12:45.902 "method": "bdev_xnvme_create" 00:12:45.902 }, 00:12:45.902 { 00:12:45.902 "method": "bdev_wait_for_examine" 00:12:45.902 } 00:12:45.902 ] 00:12:45.902 } 00:12:45.902 ] 00:12:45.902 } 00:12:45.902 [2024-11-18 06:46:38.836094] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 
00:12:45.902 [2024-11-18 06:46:38.836204] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80541 ] 00:12:46.163 [2024-11-18 06:46:38.991391] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:46.163 [2024-11-18 06:46:39.015230] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:47.646  [2024-11-18T06:46:41.306Z] Copying: 319/1024 [MB] (319 MBps) [2024-11-18T06:46:42.692Z] Copying: 639/1024 [MB] (320 MBps) [2024-11-18T06:46:42.692Z] Copying: 960/1024 [MB] (320 MBps) [2024-11-18T06:46:42.953Z] Copying: 1024/1024 [MB] (average 320 MBps) 00:12:49.866 00:12:49.866 06:46:42 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@52 -- # remove_null_blk 00:12:49.866 06:46:42 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@191 -- # modprobe -r null_blk 00:12:49.866 00:12:49.866 real 0m17.079s 00:12:49.866 user 0m14.072s 00:12:49.866 sys 0m2.496s 00:12:49.866 ************************************ 00:12:49.866 06:46:42 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:49.866 06:46:42 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:12:49.866 END TEST xnvme_to_malloc_dd_copy 00:12:49.866 ************************************ 00:12:49.866 06:46:42 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:12:49.866 06:46:42 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:12:49.866 06:46:42 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:49.866 06:46:42 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:49.866 ************************************ 00:12:49.866 START TEST xnvme_bdevperf 00:12:49.866 ************************************ 00:12:49.866 06:46:42 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:12:49.866 06:46:42 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@57 -- # init_null_blk gb=1 00:12:49.866 06:46:42 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@186 -- # [[ -e /sys/module/null_blk ]] 00:12:49.866 06:46:42 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@186 -- # modprobe null_blk gb=1 00:12:49.866 06:46:42 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@187 -- # return 00:12:49.866 06:46:42 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@59 -- # xnvme_io=() 00:12:49.866 06:46:42 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@59 -- # local xnvme0=null0 xnvme0_dev xnvme_io 00:12:49.866 06:46:42 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@60 -- # local io 00:12:49.866 06:46:42 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@62 -- # xnvme_io+=(libaio) 00:12:49.866 06:46:42 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@63 -- # xnvme_io+=(io_uring) 00:12:49.866 06:46:42 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@65 -- # xnvme0_dev=/dev/nullb0 00:12:49.866 06:46:42 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@67 -- # method_bdev_xnvme_create_0=() 00:12:49.866 06:46:42 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@67 -- # local -A method_bdev_xnvme_create_0 00:12:49.866 06:46:42 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@68 -- # method_bdev_xnvme_create_0["name"]=null0 00:12:49.866 06:46:42 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@69 -- # method_bdev_xnvme_create_0["filename"]=/dev/nullb0 00:12:49.866 06:46:42 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@71 -- # for io in "${xnvme_io[@]}" 00:12:49.866 
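xnvme_bdevperf, which starts here, reuses the single null0 bdev definition but drives it with SPDK's bdevperf example instead of spdk_dd. Each pass of the @71 loop boils down to the invocation below; /tmp/xnvme_null0.json stands in for the /dev/fd/62 config shown in the trace:

  # 4 KiB random reads, queue depth 64, 5 seconds, against the null0 bdev
  # (flags copied from the xnvme.sh@74 invocation recorded below).
  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
      --json /tmp/xnvme_null0.json -q 64 -w randread -t 5 -T null0 -o 4096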
06:46:42 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@72 -- # method_bdev_xnvme_create_0["io_mechanism"]=libaio 00:12:49.866 06:46:42 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T null0 -o 4096 00:12:49.866 06:46:42 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@74 -- # gen_conf 00:12:49.866 06:46:42 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:12:49.866 06:46:42 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:12:49.866 { 00:12:49.866 "subsystems": [ 00:12:49.866 { 00:12:49.866 "subsystem": "bdev", 00:12:49.866 "config": [ 00:12:49.866 { 00:12:49.866 "params": { 00:12:49.866 "io_mechanism": "libaio", 00:12:49.866 "filename": "/dev/nullb0", 00:12:49.866 "name": "null0" 00:12:49.866 }, 00:12:49.866 "method": "bdev_xnvme_create" 00:12:49.866 }, 00:12:49.866 { 00:12:49.866 "method": "bdev_wait_for_examine" 00:12:49.866 } 00:12:49.866 ] 00:12:49.866 } 00:12:49.866 ] 00:12:49.866 } 00:12:49.866 [2024-11-18 06:46:42.916618] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 00:12:49.866 [2024-11-18 06:46:42.916727] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80618 ] 00:12:50.127 [2024-11-18 06:46:43.071439] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:50.127 [2024-11-18 06:46:43.090267] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:50.127 Running I/O for 5 seconds... 00:12:52.145 209088.00 IOPS, 816.75 MiB/s [2024-11-18T06:46:46.617Z] 209248.00 IOPS, 817.38 MiB/s [2024-11-18T06:46:47.190Z] 209258.67 IOPS, 817.42 MiB/s [2024-11-18T06:46:48.578Z] 209200.00 IOPS, 817.19 MiB/s [2024-11-18T06:46:48.578Z] 209228.80 IOPS, 817.30 MiB/s 00:12:55.491 Latency(us) 00:12:55.491 [2024-11-18T06:46:48.578Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:55.491 Job: null0 (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:12:55.491 null0 : 5.00 209161.53 817.04 0.00 0.00 303.74 299.32 1524.97 00:12:55.491 [2024-11-18T06:46:48.578Z] =================================================================================================================== 00:12:55.491 [2024-11-18T06:46:48.578Z] Total : 209161.53 817.04 0.00 0.00 303.74 299.32 1524.97 00:12:55.491 06:46:48 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@71 -- # for io in "${xnvme_io[@]}" 00:12:55.491 06:46:48 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@72 -- # method_bdev_xnvme_create_0["io_mechanism"]=io_uring 00:12:55.491 06:46:48 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T null0 -o 4096 00:12:55.491 06:46:48 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@74 -- # gen_conf 00:12:55.491 06:46:48 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:12:55.491 06:46:48 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:12:55.491 { 00:12:55.491 "subsystems": [ 00:12:55.491 { 00:12:55.491 "subsystem": "bdev", 00:12:55.491 "config": [ 00:12:55.491 { 00:12:55.491 "params": { 00:12:55.491 "io_mechanism": "io_uring", 00:12:55.491 "filename": "/dev/nullb0", 00:12:55.491 "name": "null0" 00:12:55.491 }, 00:12:55.491 "method": "bdev_xnvme_create" 00:12:55.491 }, 
00:12:55.491 { 00:12:55.491 "method": "bdev_wait_for_examine" 00:12:55.491 } 00:12:55.491 ] 00:12:55.491 } 00:12:55.491 ] 00:12:55.491 } 00:12:55.491 [2024-11-18 06:46:48.383197] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 00:12:55.491 [2024-11-18 06:46:48.383313] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80681 ] 00:12:55.491 [2024-11-18 06:46:48.540627] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:55.491 [2024-11-18 06:46:48.565588] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:55.753 Running I/O for 5 seconds... 00:12:57.641 176128.00 IOPS, 688.00 MiB/s [2024-11-18T06:46:51.673Z] 205600.00 IOPS, 803.12 MiB/s [2024-11-18T06:46:53.057Z] 216384.00 IOPS, 845.25 MiB/s [2024-11-18T06:46:54.001Z] 221760.00 IOPS, 866.25 MiB/s [2024-11-18T06:46:54.001Z] 224972.80 IOPS, 878.80 MiB/s 00:13:00.914 Latency(us) 00:13:00.915 [2024-11-18T06:46:54.002Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:00.915 Job: null0 (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:13:00.915 null0 : 5.00 224898.34 878.51 0.00 0.00 282.28 263.09 2192.94 00:13:00.915 [2024-11-18T06:46:54.002Z] =================================================================================================================== 00:13:00.915 [2024-11-18T06:46:54.002Z] Total : 224898.34 878.51 0.00 0.00 282.28 263.09 2192.94 00:13:00.915 06:46:53 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@82 -- # remove_null_blk 00:13:00.915 06:46:53 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@191 -- # modprobe -r null_blk 00:13:00.915 00:13:00.915 real 0m10.975s 00:13:00.915 user 0m8.657s 00:13:00.915 sys 0m2.087s 00:13:00.915 06:46:53 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:00.915 ************************************ 00:13:00.915 END TEST xnvme_bdevperf 00:13:00.915 ************************************ 00:13:00.915 06:46:53 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:13:00.915 00:13:00.915 real 0m28.318s 00:13:00.915 user 0m22.835s 00:13:00.915 sys 0m4.709s 00:13:00.915 06:46:53 nvme_xnvme -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:00.915 ************************************ 00:13:00.915 END TEST nvme_xnvme 00:13:00.915 ************************************ 00:13:00.915 06:46:53 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:00.915 06:46:53 -- spdk/autotest.sh@245 -- # run_test blockdev_xnvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh xnvme 00:13:00.915 06:46:53 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:13:00.915 06:46:53 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:00.915 06:46:53 -- common/autotest_common.sh@10 -- # set +x 00:13:00.915 ************************************ 00:13:00.915 START TEST blockdev_xnvme 00:13:00.915 ************************************ 00:13:00.915 06:46:53 blockdev_xnvme -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh xnvme 00:13:00.915 * Looking for test storage... 
00:13:00.915 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:13:00.915 06:46:53 blockdev_xnvme -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:13:00.915 06:46:53 blockdev_xnvme -- common/autotest_common.sh@1693 -- # lcov --version 00:13:00.915 06:46:53 blockdev_xnvme -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:13:01.176 06:46:54 blockdev_xnvme -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:13:01.176 06:46:54 blockdev_xnvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:01.176 06:46:54 blockdev_xnvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:01.176 06:46:54 blockdev_xnvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:01.176 06:46:54 blockdev_xnvme -- scripts/common.sh@336 -- # IFS=.-: 00:13:01.176 06:46:54 blockdev_xnvme -- scripts/common.sh@336 -- # read -ra ver1 00:13:01.176 06:46:54 blockdev_xnvme -- scripts/common.sh@337 -- # IFS=.-: 00:13:01.176 06:46:54 blockdev_xnvme -- scripts/common.sh@337 -- # read -ra ver2 00:13:01.176 06:46:54 blockdev_xnvme -- scripts/common.sh@338 -- # local 'op=<' 00:13:01.176 06:46:54 blockdev_xnvme -- scripts/common.sh@340 -- # ver1_l=2 00:13:01.176 06:46:54 blockdev_xnvme -- scripts/common.sh@341 -- # ver2_l=1 00:13:01.176 06:46:54 blockdev_xnvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:01.176 06:46:54 blockdev_xnvme -- scripts/common.sh@344 -- # case "$op" in 00:13:01.176 06:46:54 blockdev_xnvme -- scripts/common.sh@345 -- # : 1 00:13:01.176 06:46:54 blockdev_xnvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:01.176 06:46:54 blockdev_xnvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:13:01.176 06:46:54 blockdev_xnvme -- scripts/common.sh@365 -- # decimal 1 00:13:01.176 06:46:54 blockdev_xnvme -- scripts/common.sh@353 -- # local d=1 00:13:01.176 06:46:54 blockdev_xnvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:01.176 06:46:54 blockdev_xnvme -- scripts/common.sh@355 -- # echo 1 00:13:01.176 06:46:54 blockdev_xnvme -- scripts/common.sh@365 -- # ver1[v]=1 00:13:01.176 06:46:54 blockdev_xnvme -- scripts/common.sh@366 -- # decimal 2 00:13:01.176 06:46:54 blockdev_xnvme -- scripts/common.sh@353 -- # local d=2 00:13:01.176 06:46:54 blockdev_xnvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:01.176 06:46:54 blockdev_xnvme -- scripts/common.sh@355 -- # echo 2 00:13:01.176 06:46:54 blockdev_xnvme -- scripts/common.sh@366 -- # ver2[v]=2 00:13:01.176 06:46:54 blockdev_xnvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:01.176 06:46:54 blockdev_xnvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:01.176 06:46:54 blockdev_xnvme -- scripts/common.sh@368 -- # return 0 00:13:01.176 06:46:54 blockdev_xnvme -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:01.176 06:46:54 blockdev_xnvme -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:13:01.176 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:01.176 --rc genhtml_branch_coverage=1 00:13:01.176 --rc genhtml_function_coverage=1 00:13:01.176 --rc genhtml_legend=1 00:13:01.176 --rc geninfo_all_blocks=1 00:13:01.176 --rc geninfo_unexecuted_blocks=1 00:13:01.176 00:13:01.176 ' 00:13:01.176 06:46:54 blockdev_xnvme -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:13:01.176 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:01.176 --rc genhtml_branch_coverage=1 00:13:01.176 --rc genhtml_function_coverage=1 00:13:01.176 --rc genhtml_legend=1 
00:13:01.176 --rc geninfo_all_blocks=1 00:13:01.176 --rc geninfo_unexecuted_blocks=1 00:13:01.176 00:13:01.176 ' 00:13:01.177 06:46:54 blockdev_xnvme -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:13:01.177 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:01.177 --rc genhtml_branch_coverage=1 00:13:01.177 --rc genhtml_function_coverage=1 00:13:01.177 --rc genhtml_legend=1 00:13:01.177 --rc geninfo_all_blocks=1 00:13:01.177 --rc geninfo_unexecuted_blocks=1 00:13:01.177 00:13:01.177 ' 00:13:01.177 06:46:54 blockdev_xnvme -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:13:01.177 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:01.177 --rc genhtml_branch_coverage=1 00:13:01.177 --rc genhtml_function_coverage=1 00:13:01.177 --rc genhtml_legend=1 00:13:01.177 --rc geninfo_all_blocks=1 00:13:01.177 --rc geninfo_unexecuted_blocks=1 00:13:01.177 00:13:01.177 ' 00:13:01.177 06:46:54 blockdev_xnvme -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:13:01.177 06:46:54 blockdev_xnvme -- bdev/nbd_common.sh@6 -- # set -e 00:13:01.177 06:46:54 blockdev_xnvme -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:13:01.177 06:46:54 blockdev_xnvme -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:13:01.177 06:46:54 blockdev_xnvme -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:13:01.177 06:46:54 blockdev_xnvme -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:13:01.177 06:46:54 blockdev_xnvme -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:13:01.177 06:46:54 blockdev_xnvme -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:13:01.177 06:46:54 blockdev_xnvme -- bdev/blockdev.sh@20 -- # : 00:13:01.177 06:46:54 blockdev_xnvme -- bdev/blockdev.sh@669 -- # QOS_DEV_1=Malloc_0 00:13:01.177 06:46:54 blockdev_xnvme -- bdev/blockdev.sh@670 -- # QOS_DEV_2=Null_1 00:13:01.177 06:46:54 blockdev_xnvme -- bdev/blockdev.sh@671 -- # QOS_RUN_TIME=5 00:13:01.177 06:46:54 blockdev_xnvme -- bdev/blockdev.sh@673 -- # uname -s 00:13:01.177 06:46:54 blockdev_xnvme -- bdev/blockdev.sh@673 -- # '[' Linux = Linux ']' 00:13:01.177 06:46:54 blockdev_xnvme -- bdev/blockdev.sh@675 -- # PRE_RESERVED_MEM=0 00:13:01.177 06:46:54 blockdev_xnvme -- bdev/blockdev.sh@681 -- # test_type=xnvme 00:13:01.177 06:46:54 blockdev_xnvme -- bdev/blockdev.sh@682 -- # crypto_device= 00:13:01.177 06:46:54 blockdev_xnvme -- bdev/blockdev.sh@683 -- # dek= 00:13:01.177 06:46:54 blockdev_xnvme -- bdev/blockdev.sh@684 -- # env_ctx= 00:13:01.177 06:46:54 blockdev_xnvme -- bdev/blockdev.sh@685 -- # wait_for_rpc= 00:13:01.177 06:46:54 blockdev_xnvme -- bdev/blockdev.sh@686 -- # '[' -n '' ']' 00:13:01.177 06:46:54 blockdev_xnvme -- bdev/blockdev.sh@689 -- # [[ xnvme == bdev ]] 00:13:01.177 06:46:54 blockdev_xnvme -- bdev/blockdev.sh@689 -- # [[ xnvme == crypto_* ]] 00:13:01.177 06:46:54 blockdev_xnvme -- bdev/blockdev.sh@692 -- # start_spdk_tgt 00:13:01.177 06:46:54 blockdev_xnvme -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=80818 00:13:01.177 06:46:54 blockdev_xnvme -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:13:01.177 06:46:54 blockdev_xnvme -- bdev/blockdev.sh@49 -- # waitforlisten 80818 00:13:01.177 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
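The start-up handshake traced here (blockdev.sh@46-@49) is the standard harness pattern: launch the target in the background, register cleanup for every exit path, then block until the RPC socket answers. Roughly, with the backgrounding detail assumed:

  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt &     # @46 ('&' assumed)
  spdk_tgt_pid=$!
  trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT   # @48
  waitforlisten "$spdk_tgt_pid"   # @49: returns once /var/tmp/spdk.sock accepts RPCs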
00:13:01.177 06:46:54 blockdev_xnvme -- common/autotest_common.sh@835 -- # '[' -z 80818 ']' 00:13:01.177 06:46:54 blockdev_xnvme -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:01.177 06:46:54 blockdev_xnvme -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:01.177 06:46:54 blockdev_xnvme -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:01.177 06:46:54 blockdev_xnvme -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:01.177 06:46:54 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:01.177 06:46:54 blockdev_xnvme -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:13:01.177 [2024-11-18 06:46:54.124443] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 00:13:01.177 [2024-11-18 06:46:54.124560] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80818 ] 00:13:01.438 [2024-11-18 06:46:54.277047] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:01.438 [2024-11-18 06:46:54.294531] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:02.010 06:46:54 blockdev_xnvme -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:02.011 06:46:54 blockdev_xnvme -- common/autotest_common.sh@868 -- # return 0 00:13:02.011 06:46:54 blockdev_xnvme -- bdev/blockdev.sh@693 -- # case "$test_type" in 00:13:02.011 06:46:54 blockdev_xnvme -- bdev/blockdev.sh@728 -- # setup_xnvme_conf 00:13:02.011 06:46:54 blockdev_xnvme -- bdev/blockdev.sh@88 -- # local io_mechanism=io_uring 00:13:02.011 06:46:54 blockdev_xnvme -- bdev/blockdev.sh@89 -- # local nvme nvmes 00:13:02.011 06:46:54 blockdev_xnvme -- bdev/blockdev.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:13:02.270 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:13:02.270 Waiting for block devices as requested 00:13:02.531 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:13:02.531 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:13:02.531 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:13:02.531 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:13:07.824 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:13:07.824 06:47:00 blockdev_xnvme -- bdev/blockdev.sh@92 -- # get_zoned_devs 00:13:07.824 06:47:00 blockdev_xnvme -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:13:07.824 06:47:00 blockdev_xnvme -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:13:07.824 06:47:00 blockdev_xnvme -- common/autotest_common.sh@1658 -- # local nvme bdf 00:13:07.824 06:47:00 blockdev_xnvme -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:13:07.824 06:47:00 blockdev_xnvme -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n1 00:13:07.824 06:47:00 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:13:07.824 06:47:00 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:13:07.824 06:47:00 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:13:07.824 06:47:00 blockdev_xnvme -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:13:07.824 
06:47:00 blockdev_xnvme -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1n1 00:13:07.824 06:47:00 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:13:07.824 06:47:00 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:13:07.824 06:47:00 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:13:07.824 06:47:00 blockdev_xnvme -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:13:07.824 06:47:00 blockdev_xnvme -- common/autotest_common.sh@1661 -- # is_block_zoned nvme2n1 00:13:07.824 06:47:00 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme2n1 00:13:07.824 06:47:00 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:13:07.824 06:47:00 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:13:07.824 06:47:00 blockdev_xnvme -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:13:07.824 06:47:00 blockdev_xnvme -- common/autotest_common.sh@1661 -- # is_block_zoned nvme2n2 00:13:07.824 06:47:00 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme2n2 00:13:07.824 06:47:00 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:13:07.824 06:47:00 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:13:07.824 06:47:00 blockdev_xnvme -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:13:07.824 06:47:00 blockdev_xnvme -- common/autotest_common.sh@1661 -- # is_block_zoned nvme2n3 00:13:07.824 06:47:00 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme2n3 00:13:07.824 06:47:00 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:13:07.824 06:47:00 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:13:07.824 06:47:00 blockdev_xnvme -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:13:07.824 06:47:00 blockdev_xnvme -- common/autotest_common.sh@1661 -- # is_block_zoned nvme3c3n1 00:13:07.824 06:47:00 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme3c3n1 00:13:07.824 06:47:00 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:13:07.824 06:47:00 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:13:07.824 06:47:00 blockdev_xnvme -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:13:07.824 06:47:00 blockdev_xnvme -- common/autotest_common.sh@1661 -- # is_block_zoned nvme3n1 00:13:07.824 06:47:00 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme3n1 00:13:07.824 06:47:00 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:13:07.824 06:47:00 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:13:07.824 06:47:00 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:13:07.824 06:47:00 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme0n1 ]] 00:13:07.824 06:47:00 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:13:07.824 06:47:00 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:13:07.824 06:47:00 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:13:07.824 06:47:00 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme1n1 ]] 00:13:07.825 06:47:00 blockdev_xnvme 
-- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:13:07.825 06:47:00 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:13:07.825 06:47:00 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:13:07.825 06:47:00 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme2n1 ]] 00:13:07.825 06:47:00 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:13:07.825 06:47:00 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:13:07.825 06:47:00 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:13:07.825 06:47:00 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme2n2 ]] 00:13:07.825 06:47:00 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:13:07.825 06:47:00 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:13:07.825 06:47:00 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:13:07.825 06:47:00 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme2n3 ]] 00:13:07.825 06:47:00 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:13:07.825 06:47:00 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:13:07.825 06:47:00 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:13:07.825 06:47:00 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme3n1 ]] 00:13:07.825 06:47:00 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:13:07.825 06:47:00 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:13:07.825 06:47:00 blockdev_xnvme -- bdev/blockdev.sh@99 -- # (( 6 > 0 )) 00:13:07.825 06:47:00 blockdev_xnvme -- bdev/blockdev.sh@100 -- # rpc_cmd 00:13:07.825 06:47:00 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:07.825 06:47:00 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:07.825 06:47:00 blockdev_xnvme -- bdev/blockdev.sh@100 -- # printf '%s\n' 'bdev_xnvme_create /dev/nvme0n1 nvme0n1 io_uring' 'bdev_xnvme_create /dev/nvme1n1 nvme1n1 io_uring' 'bdev_xnvme_create /dev/nvme2n1 nvme2n1 io_uring' 'bdev_xnvme_create /dev/nvme2n2 nvme2n2 io_uring' 'bdev_xnvme_create /dev/nvme2n3 nvme2n3 io_uring' 'bdev_xnvme_create /dev/nvme3n1 nvme3n1 io_uring' 00:13:07.825 nvme0n1 00:13:07.825 nvme1n1 00:13:07.825 nvme2n1 00:13:07.825 nvme2n2 00:13:07.825 nvme2n3 00:13:07.825 nvme3n1 00:13:07.825 06:47:00 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:07.825 06:47:00 blockdev_xnvme -- bdev/blockdev.sh@736 -- # rpc_cmd bdev_wait_for_examine 00:13:07.825 06:47:00 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:07.825 06:47:00 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:07.825 06:47:00 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:07.825 06:47:00 blockdev_xnvme -- bdev/blockdev.sh@739 -- # cat 00:13:07.825 06:47:00 blockdev_xnvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n accel 00:13:07.825 06:47:00 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:07.825 06:47:00 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:07.825 06:47:00 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:07.825 06:47:00 blockdev_xnvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n bdev 00:13:07.825 06:47:00 blockdev_xnvme -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:13:07.825 06:47:00 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:07.825 06:47:00 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:07.825 06:47:00 blockdev_xnvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n iobuf 00:13:07.825 06:47:00 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:07.825 06:47:00 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:07.825 06:47:00 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:07.825 06:47:00 blockdev_xnvme -- bdev/blockdev.sh@747 -- # mapfile -t bdevs 00:13:07.825 06:47:00 blockdev_xnvme -- bdev/blockdev.sh@747 -- # jq -r '.[] | select(.claimed == false)' 00:13:07.825 06:47:00 blockdev_xnvme -- bdev/blockdev.sh@747 -- # rpc_cmd bdev_get_bdevs 00:13:07.825 06:47:00 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:07.825 06:47:00 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:07.825 06:47:00 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:07.825 06:47:00 blockdev_xnvme -- bdev/blockdev.sh@748 -- # mapfile -t bdevs_name 00:13:07.825 06:47:00 blockdev_xnvme -- bdev/blockdev.sh@748 -- # printf '%s\n' '{' ' "name": "nvme0n1",' ' "aliases": [' ' "f1b775f2-9521-4b7d-b194-fd51c5316f05"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "f1b775f2-9521-4b7d-b194-fd51c5316f05",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n1",' ' "aliases": [' ' "8075aa0f-9748-49b3-ba7d-4ed15a4566f1"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "8075aa0f-9748-49b3-ba7d-4ed15a4566f1",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n1",' ' "aliases": [' ' "0d7de33b-bf06-4bcd-a098-5f457ba332c5"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "0d7de33b-bf06-4bcd-a098-5f457ba332c5",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' 
"write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n2",' ' "aliases": [' ' "ee40d08c-febd-4542-8419-c3b52761d6a5"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "ee40d08c-febd-4542-8419-c3b52761d6a5",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n3",' ' "aliases": [' ' "860a173d-9454-4908-9d71-8a9cce65b0be"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "860a173d-9454-4908-9d71-8a9cce65b0be",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme3n1",' ' "aliases": [' ' "b874a3d9-cab7-4cae-b582-c442459f3969"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "b874a3d9-cab7-4cae-b582-c442459f3969",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' 00:13:07.825 06:47:00 blockdev_xnvme -- bdev/blockdev.sh@748 -- # jq -r .name 00:13:07.825 06:47:00 blockdev_xnvme -- bdev/blockdev.sh@749 -- # bdev_list=("${bdevs_name[@]}") 00:13:07.825 06:47:00 blockdev_xnvme -- bdev/blockdev.sh@751 -- # hello_world_bdev=nvme0n1 00:13:07.825 06:47:00 blockdev_xnvme -- bdev/blockdev.sh@752 -- # trap - SIGINT SIGTERM EXIT 00:13:07.825 
06:47:00 blockdev_xnvme -- bdev/blockdev.sh@753 -- # killprocess 80818 00:13:07.825 06:47:00 blockdev_xnvme -- common/autotest_common.sh@954 -- # '[' -z 80818 ']' 00:13:07.825 06:47:00 blockdev_xnvme -- common/autotest_common.sh@958 -- # kill -0 80818 00:13:07.825 06:47:00 blockdev_xnvme -- common/autotest_common.sh@959 -- # uname 00:13:07.825 06:47:00 blockdev_xnvme -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:07.825 06:47:00 blockdev_xnvme -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80818 00:13:07.825 06:47:00 blockdev_xnvme -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:07.825 06:47:00 blockdev_xnvme -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:07.825 killing process with pid 80818 00:13:07.825 06:47:00 blockdev_xnvme -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80818' 00:13:07.825 06:47:00 blockdev_xnvme -- common/autotest_common.sh@973 -- # kill 80818 00:13:07.825 06:47:00 blockdev_xnvme -- common/autotest_common.sh@978 -- # wait 80818 00:13:08.087 06:47:01 blockdev_xnvme -- bdev/blockdev.sh@757 -- # trap cleanup SIGINT SIGTERM EXIT 00:13:08.087 06:47:01 blockdev_xnvme -- bdev/blockdev.sh@759 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b nvme0n1 '' 00:13:08.087 06:47:01 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:13:08.087 06:47:01 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:08.087 06:47:01 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:08.087 ************************************ 00:13:08.087 START TEST bdev_hello_world 00:13:08.087 ************************************ 00:13:08.087 06:47:01 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b nvme0n1 '' 00:13:08.087 [2024-11-18 06:47:01.117596] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 00:13:08.087 [2024-11-18 06:47:01.117719] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81166 ] 00:13:08.347 [2024-11-18 06:47:01.271451] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:08.347 [2024-11-18 06:47:01.290059] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:08.608 [2024-11-18 06:47:01.447699] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:13:08.608 [2024-11-18 06:47:01.447743] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev nvme0n1 00:13:08.608 [2024-11-18 06:47:01.447756] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:13:08.608 [2024-11-18 06:47:01.449271] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:13:08.608 [2024-11-18 06:47:01.449690] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:13:08.608 [2024-11-18 06:47:01.449710] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:13:08.609 [2024-11-18 06:47:01.450002] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
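Everything bdev_hello_world exercised above comes down to one invocation of the hello_bdev example against the configuration saved earlier. The paths and the -b argument are taken verbatim from the trace; the app opens the bdev, writes "Hello World!" through an io channel, and reads it back, exactly as the NOTICE lines show.

SPDK_DIR=/home/vagrant/spdk_repo/spdk          # repo location from the trace
"$SPDK_DIR/build/examples/hello_bdev" \
    --json "$SPDK_DIR/test/bdev/bdev.json" \   # xnvme bdev config saved above
    -b nvme0n1                                 # hello_world_bdev picked above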
00:13:08.609 00:13:08.609 [2024-11-18 06:47:01.450031] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:13:08.609 00:13:08.609 real 0m0.507s 00:13:08.609 user 0m0.260s 00:13:08.609 sys 0m0.139s 00:13:08.609 06:47:01 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:08.609 06:47:01 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:13:08.609 ************************************ 00:13:08.609 END TEST bdev_hello_world 00:13:08.609 ************************************ 00:13:08.609 06:47:01 blockdev_xnvme -- bdev/blockdev.sh@760 -- # run_test bdev_bounds bdev_bounds '' 00:13:08.609 06:47:01 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:13:08.609 06:47:01 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:08.609 06:47:01 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:08.609 ************************************ 00:13:08.609 START TEST bdev_bounds 00:13:08.609 ************************************ 00:13:08.609 06:47:01 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@1129 -- # bdev_bounds '' 00:13:08.609 Process bdevio pid: 81190 00:13:08.609 06:47:01 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=81190 00:13:08.609 06:47:01 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:13:08.609 06:47:01 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 81190' 00:13:08.609 06:47:01 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 81190 00:13:08.609 06:47:01 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@835 -- # '[' -z 81190 ']' 00:13:08.609 06:47:01 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:13:08.609 06:47:01 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:08.609 06:47:01 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:08.609 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:08.609 06:47:01 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:08.609 06:47:01 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:08.609 06:47:01 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:13:08.609 [2024-11-18 06:47:01.685206] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 
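For the bounds test starting here, the harness launches the bdevio CUnit app over all unclaimed bdevs and then triggers the suites through a second RPC client. Both command lines appear verbatim in the trace; the backgrounding, socket wait, and kill below are a simplified stand-in for the waitforlisten/killprocess helpers.

SPDK_DIR=/home/vagrant/spdk_repo/spdk
"$SPDK_DIR/test/bdev/bdevio/bdevio" -w -s 0 \
    --json "$SPDK_DIR/test/bdev/bdev.json" '' &   # '' is an empty extra-args slot
bdevio_pid=$!
# once the app listens on /var/tmp/spdk.sock, run every registered suite:
"$SPDK_DIR/test/bdev/bdevio/tests.py" perform_tests
kill "$bdevio_pid"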
00:13:08.609 [2024-11-18 06:47:01.685328] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81190 ] 00:13:08.870 [2024-11-18 06:47:01.842359] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:08.870 [2024-11-18 06:47:01.862395] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:08.870 [2024-11-18 06:47:01.862938] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:08.870 [2024-11-18 06:47:01.862969] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:09.813 06:47:02 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:09.813 06:47:02 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@868 -- # return 0 00:13:09.813 06:47:02 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:13:09.813 I/O targets: 00:13:09.813 nvme0n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:13:09.813 nvme1n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:13:09.813 nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:13:09.813 nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:13:09.813 nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:13:09.813 nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:13:09.813 00:13:09.813 00:13:09.813 CUnit - A unit testing framework for C - Version 2.1-3 00:13:09.813 http://cunit.sourceforge.net/ 00:13:09.813 00:13:09.813 00:13:09.813 Suite: bdevio tests on: nvme3n1 00:13:09.813 Test: blockdev write read block ...passed 00:13:09.813 Test: blockdev write zeroes read block ...passed 00:13:09.813 Test: blockdev write zeroes read no split ...passed 00:13:09.813 Test: blockdev write zeroes read split ...passed 00:13:09.813 Test: blockdev write zeroes read split partial ...passed 00:13:09.813 Test: blockdev reset ...passed 00:13:09.813 Test: blockdev write read 8 blocks ...passed 00:13:09.814 Test: blockdev write read size > 128k ...passed 00:13:09.814 Test: blockdev write read invalid size ...passed 00:13:09.814 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:13:09.814 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:13:09.814 Test: blockdev write read max offset ...passed 00:13:09.814 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:13:09.814 Test: blockdev writev readv 8 blocks ...passed 00:13:09.814 Test: blockdev writev readv 30 x 1block ...passed 00:13:09.814 Test: blockdev writev readv block ...passed 00:13:09.814 Test: blockdev writev readv size > 128k ...passed 00:13:09.814 Test: blockdev writev readv size > 128k in two iovs ...passed 00:13:09.814 Test: blockdev comparev and writev ...passed 00:13:09.814 Test: blockdev nvme passthru rw ...passed 00:13:09.814 Test: blockdev nvme passthru vendor specific ...passed 00:13:09.814 Test: blockdev nvme admin passthru ...passed 00:13:09.814 Test: blockdev copy ...passed 00:13:09.814 Suite: bdevio tests on: nvme2n3 00:13:09.814 Test: blockdev write read block ...passed 00:13:09.814 Test: blockdev write zeroes read block ...passed 00:13:09.814 Test: blockdev write zeroes read no split ...passed 00:13:09.814 Test: blockdev write zeroes read split ...passed 00:13:09.814 Test: blockdev write zeroes read split partial ...passed 00:13:09.814 Test: blockdev reset ...passed 
00:13:09.814 Test: blockdev write read 8 blocks ...passed 00:13:09.814 Test: blockdev write read size > 128k ...passed 00:13:09.814 Test: blockdev write read invalid size ...passed 00:13:09.814 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:13:09.814 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:13:09.814 Test: blockdev write read max offset ...passed 00:13:09.814 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:13:09.814 Test: blockdev writev readv 8 blocks ...passed 00:13:09.814 Test: blockdev writev readv 30 x 1block ...passed 00:13:09.814 Test: blockdev writev readv block ...passed 00:13:09.814 Test: blockdev writev readv size > 128k ...passed 00:13:09.814 Test: blockdev writev readv size > 128k in two iovs ...passed 00:13:09.814 Test: blockdev comparev and writev ...passed 00:13:09.814 Test: blockdev nvme passthru rw ...passed 00:13:09.814 Test: blockdev nvme passthru vendor specific ...passed 00:13:09.814 Test: blockdev nvme admin passthru ...passed 00:13:09.814 Test: blockdev copy ...passed 00:13:09.814 Suite: bdevio tests on: nvme2n2 00:13:09.814 Test: blockdev write read block ...passed 00:13:09.814 Test: blockdev write zeroes read block ...passed 00:13:09.814 Test: blockdev write zeroes read no split ...passed 00:13:09.814 Test: blockdev write zeroes read split ...passed 00:13:09.814 Test: blockdev write zeroes read split partial ...passed 00:13:09.814 Test: blockdev reset ...passed 00:13:09.814 Test: blockdev write read 8 blocks ...passed 00:13:09.814 Test: blockdev write read size > 128k ...passed 00:13:09.814 Test: blockdev write read invalid size ...passed 00:13:09.814 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:13:09.814 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:13:09.814 Test: blockdev write read max offset ...passed 00:13:09.814 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:13:09.814 Test: blockdev writev readv 8 blocks ...passed 00:13:09.814 Test: blockdev writev readv 30 x 1block ...passed 00:13:09.814 Test: blockdev writev readv block ...passed 00:13:09.814 Test: blockdev writev readv size > 128k ...passed 00:13:09.814 Test: blockdev writev readv size > 128k in two iovs ...passed 00:13:09.814 Test: blockdev comparev and writev ...passed 00:13:09.814 Test: blockdev nvme passthru rw ...passed 00:13:09.814 Test: blockdev nvme passthru vendor specific ...passed 00:13:09.814 Test: blockdev nvme admin passthru ...passed 00:13:09.814 Test: blockdev copy ...passed 00:13:09.814 Suite: bdevio tests on: nvme2n1 00:13:09.814 Test: blockdev write read block ...passed 00:13:09.814 Test: blockdev write zeroes read block ...passed 00:13:09.814 Test: blockdev write zeroes read no split ...passed 00:13:09.814 Test: blockdev write zeroes read split ...passed 00:13:09.814 Test: blockdev write zeroes read split partial ...passed 00:13:09.814 Test: blockdev reset ...passed 00:13:09.814 Test: blockdev write read 8 blocks ...passed 00:13:09.814 Test: blockdev write read size > 128k ...passed 00:13:09.814 Test: blockdev write read invalid size ...passed 00:13:09.814 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:13:09.814 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:13:09.814 Test: blockdev write read max offset ...passed 00:13:09.814 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:13:09.814 Test: blockdev writev readv 8 blocks 
...passed 00:13:09.814 Test: blockdev writev readv 30 x 1block ...passed 00:13:09.814 Test: blockdev writev readv block ...passed 00:13:09.814 Test: blockdev writev readv size > 128k ...passed 00:13:09.814 Test: blockdev writev readv size > 128k in two iovs ...passed 00:13:09.814 Test: blockdev comparev and writev ...passed 00:13:09.814 Test: blockdev nvme passthru rw ...passed 00:13:09.814 Test: blockdev nvme passthru vendor specific ...passed 00:13:09.814 Test: blockdev nvme admin passthru ...passed 00:13:09.814 Test: blockdev copy ...passed 00:13:09.814 Suite: bdevio tests on: nvme1n1 00:13:09.814 Test: blockdev write read block ...passed 00:13:09.814 Test: blockdev write zeroes read block ...passed 00:13:09.814 Test: blockdev write zeroes read no split ...passed 00:13:09.814 Test: blockdev write zeroes read split ...passed 00:13:09.814 Test: blockdev write zeroes read split partial ...passed 00:13:09.814 Test: blockdev reset ...passed 00:13:09.814 Test: blockdev write read 8 blocks ...passed 00:13:09.814 Test: blockdev write read size > 128k ...passed 00:13:09.814 Test: blockdev write read invalid size ...passed 00:13:09.814 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:13:09.814 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:13:09.814 Test: blockdev write read max offset ...passed 00:13:09.814 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:13:09.814 Test: blockdev writev readv 8 blocks ...passed 00:13:09.814 Test: blockdev writev readv 30 x 1block ...passed 00:13:09.814 Test: blockdev writev readv block ...passed 00:13:09.814 Test: blockdev writev readv size > 128k ...passed 00:13:09.814 Test: blockdev writev readv size > 128k in two iovs ...passed 00:13:09.814 Test: blockdev comparev and writev ...passed 00:13:09.814 Test: blockdev nvme passthru rw ...passed 00:13:09.814 Test: blockdev nvme passthru vendor specific ...passed 00:13:09.814 Test: blockdev nvme admin passthru ...passed 00:13:09.814 Test: blockdev copy ...passed 00:13:09.814 Suite: bdevio tests on: nvme0n1 00:13:09.814 Test: blockdev write read block ...passed 00:13:09.814 Test: blockdev write zeroes read block ...passed 00:13:09.814 Test: blockdev write zeroes read no split ...passed 00:13:09.814 Test: blockdev write zeroes read split ...passed 00:13:09.814 Test: blockdev write zeroes read split partial ...passed 00:13:09.814 Test: blockdev reset ...passed 00:13:09.814 Test: blockdev write read 8 blocks ...passed 00:13:09.814 Test: blockdev write read size > 128k ...passed 00:13:09.814 Test: blockdev write read invalid size ...passed 00:13:09.814 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:13:09.814 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:13:09.814 Test: blockdev write read max offset ...passed 00:13:09.814 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:13:09.814 Test: blockdev writev readv 8 blocks ...passed 00:13:09.814 Test: blockdev writev readv 30 x 1block ...passed 00:13:09.814 Test: blockdev writev readv block ...passed 00:13:09.814 Test: blockdev writev readv size > 128k ...passed 00:13:09.814 Test: blockdev writev readv size > 128k in two iovs ...passed 00:13:09.814 Test: blockdev comparev and writev ...passed 00:13:09.814 Test: blockdev nvme passthru rw ...passed 00:13:09.814 Test: blockdev nvme passthru vendor specific ...passed 00:13:09.814 Test: blockdev nvme admin passthru ...passed 00:13:09.814 Test: blockdev copy ...passed 
00:13:09.814 00:13:09.814 Run Summary: Type Total Ran Passed Failed Inactive 00:13:09.814 suites 6 6 n/a 0 0 00:13:09.814 tests 138 138 138 0 0 00:13:09.814 asserts 780 780 780 0 n/a 00:13:09.814 00:13:09.814 Elapsed time = 0.235 seconds 00:13:09.814 0 00:13:09.814 06:47:02 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 81190 00:13:09.814 06:47:02 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@954 -- # '[' -z 81190 ']' 00:13:09.814 06:47:02 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@958 -- # kill -0 81190 00:13:09.814 06:47:02 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@959 -- # uname 00:13:09.814 06:47:02 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:09.814 06:47:02 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 81190 00:13:09.814 06:47:02 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:09.814 06:47:02 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:09.814 06:47:02 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@972 -- # echo 'killing process with pid 81190' 00:13:09.814 killing process with pid 81190 00:13:09.814 06:47:02 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@973 -- # kill 81190 00:13:09.814 06:47:02 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@978 -- # wait 81190 00:13:09.814 06:47:02 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:13:09.814 00:13:09.814 real 0m1.257s 00:13:09.814 user 0m3.251s 00:13:09.814 sys 0m0.240s 00:13:09.814 06:47:02 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:09.814 06:47:02 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:13:09.814 ************************************ 00:13:09.814 END TEST bdev_bounds 00:13:09.814 ************************************ 00:13:10.075 06:47:02 blockdev_xnvme -- bdev/blockdev.sh@761 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' '' 00:13:10.075 06:47:02 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:13:10.075 06:47:02 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:10.075 06:47:02 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:10.075 ************************************ 00:13:10.075 START TEST bdev_nbd 00:13:10.075 ************************************ 00:13:10.075 06:47:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@1129 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' '' 00:13:10.075 06:47:02 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:13:10.075 06:47:02 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:13:10.075 06:47:02 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:10.075 06:47:02 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:13:10.075 06:47:02 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:13:10.075 06:47:02 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:13:10.075 06:47:02 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=6 
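The NBD test being set up here pairs each of the six xnvme bdevs with a kernel /dev/nbdX node over a dedicated RPC socket. A sketch of the explicit mapping used in the data-verify phase further down; the socket path and both lists come from the trace, while the loop itself is illustrative.

rpc=(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock)
bdev_list=(nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1)
nbd_list=(/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13)
for i in "${!bdev_list[@]}"; do
    "${rpc[@]}" nbd_start_disk "${bdev_list[$i]}" "${nbd_list[$i]}"  # export over NBD
done
"${rpc[@]}" nbd_get_disks   # lists the current nbd_device/bdev_name pairs, cf. the JSON in this trace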
00:13:10.075 06:47:02 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:13:10.075 06:47:02 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:13:10.075 06:47:02 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:13:10.075 06:47:02 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=6 00:13:10.075 06:47:02 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:13:10.075 06:47:02 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:13:10.075 06:47:02 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:13:10.075 06:47:02 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:13:10.075 06:47:02 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=81241 00:13:10.075 06:47:02 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:13:10.075 06:47:02 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 81241 /var/tmp/spdk-nbd.sock 00:13:10.075 06:47:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@835 -- # '[' -z 81241 ']' 00:13:10.075 06:47:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:13:10.075 06:47:02 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:13:10.075 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:13:10.075 06:47:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:10.075 06:47:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:13:10.075 06:47:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:10.075 06:47:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:13:10.075 [2024-11-18 06:47:03.005137] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 
00:13:10.075 [2024-11-18 06:47:03.005251] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:10.075 [2024-11-18 06:47:03.153788] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:10.336 [2024-11-18 06:47:03.171055] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:10.907 06:47:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:10.907 06:47:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # return 0 00:13:10.907 06:47:03 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' 00:13:10.907 06:47:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:10.907 06:47:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:13:10.907 06:47:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:13:10.907 06:47:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' 00:13:10.907 06:47:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:10.907 06:47:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:13:10.907 06:47:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:13:10.907 06:47:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:13:10.907 06:47:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:13:10.907 06:47:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:13:10.907 06:47:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:13:10.907 06:47:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1 00:13:11.169 06:47:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:13:11.169 06:47:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:13:11.169 06:47:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:13:11.169 06:47:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:13:11.169 06:47:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:13:11.169 06:47:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:11.169 06:47:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:11.169 06:47:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:13:11.169 06:47:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:13:11.169 06:47:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:11.169 06:47:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:11.169 06:47:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:11.169 
1+0 records in 00:13:11.169 1+0 records out 00:13:11.169 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000470562 s, 8.7 MB/s 00:13:11.169 06:47:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:11.169 06:47:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:13:11.169 06:47:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:11.169 06:47:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:11.169 06:47:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:13:11.169 06:47:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:13:11.169 06:47:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:13:11.169 06:47:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n1 00:13:11.432 06:47:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:13:11.432 06:47:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:13:11.432 06:47:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:13:11.432 06:47:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:13:11.432 06:47:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:13:11.432 06:47:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:11.432 06:47:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:11.432 06:47:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:13:11.432 06:47:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:13:11.432 06:47:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:11.432 06:47:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:11.432 06:47:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:11.432 1+0 records in 00:13:11.432 1+0 records out 00:13:11.432 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00116227 s, 3.5 MB/s 00:13:11.432 06:47:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:11.432 06:47:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:13:11.432 06:47:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:11.432 06:47:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:11.432 06:47:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:13:11.432 06:47:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:13:11.432 06:47:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:13:11.432 06:47:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n1 00:13:11.432 06:47:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:13:11.432 06:47:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:13:11.432 06:47:04 
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:13:11.432 06:47:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd2 00:13:11.432 06:47:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:13:11.432 06:47:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:11.432 06:47:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:11.432 06:47:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd2 /proc/partitions 00:13:11.432 06:47:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:13:11.432 06:47:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:11.432 06:47:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:11.432 06:47:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:11.432 1+0 records in 00:13:11.432 1+0 records out 00:13:11.432 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000950497 s, 4.3 MB/s 00:13:11.432 06:47:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:11.432 06:47:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:13:11.432 06:47:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:11.432 06:47:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:11.432 06:47:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:13:11.432 06:47:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:13:11.432 06:47:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:13:11.432 06:47:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n2 00:13:11.695 06:47:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:13:11.695 06:47:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:13:11.695 06:47:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:13:11.695 06:47:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd3 00:13:11.695 06:47:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:13:11.695 06:47:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:11.695 06:47:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:11.695 06:47:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd3 /proc/partitions 00:13:11.695 06:47:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:13:11.695 06:47:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:11.695 06:47:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:11.695 06:47:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:11.695 1+0 records in 00:13:11.695 1+0 records out 00:13:11.695 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000896235 s, 4.6 MB/s 00:13:11.695 06:47:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # 
stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:11.695 06:47:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:13:11.695 06:47:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:11.695 06:47:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:11.695 06:47:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:13:11.695 06:47:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:13:11.695 06:47:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:13:11.695 06:47:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n3 00:13:11.956 06:47:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:13:11.956 06:47:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:13:11.956 06:47:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:13:11.956 06:47:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd4 00:13:11.956 06:47:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:13:11.956 06:47:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:11.956 06:47:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:11.956 06:47:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd4 /proc/partitions 00:13:11.956 06:47:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:13:11.957 06:47:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:11.957 06:47:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:11.957 06:47:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:11.957 1+0 records in 00:13:11.957 1+0 records out 00:13:11.957 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00104195 s, 3.9 MB/s 00:13:11.957 06:47:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:11.957 06:47:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:13:11.957 06:47:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:11.957 06:47:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:11.957 06:47:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:13:11.957 06:47:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:13:11.957 06:47:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:13:11.957 06:47:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme3n1 00:13:12.218 06:47:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:13:12.218 06:47:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:13:12.218 06:47:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:13:12.218 06:47:05 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd5 00:13:12.218 06:47:05 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:13:12.218 06:47:05 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:12.218 06:47:05 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:12.218 06:47:05 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd5 /proc/partitions 00:13:12.218 06:47:05 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:13:12.218 06:47:05 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:12.218 06:47:05 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:12.218 06:47:05 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:12.218 1+0 records in 00:13:12.219 1+0 records out 00:13:12.219 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00115319 s, 3.6 MB/s 00:13:12.219 06:47:05 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:12.219 06:47:05 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:13:12.219 06:47:05 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:12.219 06:47:05 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:12.219 06:47:05 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:13:12.219 06:47:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:13:12.219 06:47:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:13:12.219 06:47:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:13:12.479 06:47:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:13:12.479 { 00:13:12.479 "nbd_device": "/dev/nbd0", 00:13:12.479 "bdev_name": "nvme0n1" 00:13:12.479 }, 00:13:12.479 { 00:13:12.479 "nbd_device": "/dev/nbd1", 00:13:12.479 "bdev_name": "nvme1n1" 00:13:12.479 }, 00:13:12.479 { 00:13:12.479 "nbd_device": "/dev/nbd2", 00:13:12.479 "bdev_name": "nvme2n1" 00:13:12.479 }, 00:13:12.479 { 00:13:12.479 "nbd_device": "/dev/nbd3", 00:13:12.479 "bdev_name": "nvme2n2" 00:13:12.479 }, 00:13:12.479 { 00:13:12.479 "nbd_device": "/dev/nbd4", 00:13:12.479 "bdev_name": "nvme2n3" 00:13:12.479 }, 00:13:12.479 { 00:13:12.479 "nbd_device": "/dev/nbd5", 00:13:12.479 "bdev_name": "nvme3n1" 00:13:12.479 } 00:13:12.479 ]' 00:13:12.479 06:47:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:13:12.479 06:47:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:13:12.479 { 00:13:12.479 "nbd_device": "/dev/nbd0", 00:13:12.479 "bdev_name": "nvme0n1" 00:13:12.479 }, 00:13:12.480 { 00:13:12.480 "nbd_device": "/dev/nbd1", 00:13:12.480 "bdev_name": "nvme1n1" 00:13:12.480 }, 00:13:12.480 { 00:13:12.480 "nbd_device": "/dev/nbd2", 00:13:12.480 "bdev_name": "nvme2n1" 00:13:12.480 }, 00:13:12.480 { 00:13:12.480 "nbd_device": "/dev/nbd3", 00:13:12.480 "bdev_name": "nvme2n2" 00:13:12.480 }, 00:13:12.480 { 00:13:12.480 "nbd_device": "/dev/nbd4", 00:13:12.480 "bdev_name": "nvme2n3" 00:13:12.480 }, 00:13:12.480 { 00:13:12.480 "nbd_device": "/dev/nbd5", 00:13:12.480 "bdev_name": "nvme3n1" 00:13:12.480 } 00:13:12.480 ]' 00:13:12.480 06:47:05 
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:13:12.480 06:47:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5' 00:13:12.480 06:47:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:12.480 06:47:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5') 00:13:12.480 06:47:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:12.480 06:47:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:13:12.480 06:47:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:12.480 06:47:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:13:12.741 06:47:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:12.741 06:47:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:12.741 06:47:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:12.741 06:47:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:12.741 06:47:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:12.741 06:47:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:12.741 06:47:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:12.741 06:47:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:12.741 06:47:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:12.741 06:47:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:13:13.002 06:47:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:13:13.002 06:47:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:13:13.002 06:47:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:13:13.002 06:47:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:13.002 06:47:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:13.002 06:47:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:13:13.002 06:47:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:13.002 06:47:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:13.002 06:47:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:13.002 06:47:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:13:13.263 06:47:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:13:13.263 06:47:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:13:13.263 06:47:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:13:13.263 06:47:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:13.263 06:47:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:13.263 06:47:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep 
-q -w nbd2 /proc/partitions 00:13:13.263 06:47:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:13.263 06:47:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:13.263 06:47:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:13.263 06:47:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:13:13.525 06:47:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:13:13.525 06:47:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:13:13.525 06:47:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:13:13.525 06:47:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:13.525 06:47:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:13.525 06:47:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:13:13.525 06:47:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:13.525 06:47:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:13.525 06:47:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:13.525 06:47:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:13:13.786 06:47:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:13:13.786 06:47:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:13:13.786 06:47:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:13:13.786 06:47:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:13.786 06:47:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:13.786 06:47:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:13:13.786 06:47:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:13.786 06:47:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:13.786 06:47:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:13.786 06:47:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:13:13.786 06:47:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:13:13.786 06:47:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:13:13.786 06:47:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:13:13.786 06:47:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:13.786 06:47:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:13.786 06:47:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:13:13.786 06:47:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:13.786 06:47:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:13.786 06:47:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:13:13.786 06:47:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:13.786 06:47:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:13:14.048 06:47:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:13:14.048 06:47:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:13:14.048 06:47:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:13:14.048 06:47:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:13:14.048 06:47:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:13:14.048 06:47:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:13:14.048 06:47:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:13:14.048 06:47:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:13:14.048 06:47:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:13:14.048 06:47:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:13:14.048 06:47:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:13:14.048 06:47:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:13:14.048 06:47:07 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:13:14.048 06:47:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:14.048 06:47:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:13:14.048 06:47:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:13:14.048 06:47:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:13:14.048 06:47:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:13:14.048 06:47:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:13:14.048 06:47:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:14.048 06:47:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:13:14.048 06:47:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:14.048 06:47:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:13:14.048 06:47:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:14.048 06:47:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:13:14.048 06:47:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:14.048 06:47:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:13:14.048 06:47:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1 /dev/nbd0 00:13:14.310 /dev/nbd0 00:13:14.310 06:47:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:13:14.310 06:47:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:13:14.310 06:47:07 blockdev_xnvme.bdev_nbd -- 
common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:13:14.310 06:47:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:13:14.310 06:47:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:14.310 06:47:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:14.310 06:47:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:13:14.310 06:47:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:13:14.310 06:47:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:14.310 06:47:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:14.310 06:47:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:14.310 1+0 records in 00:13:14.310 1+0 records out 00:13:14.310 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00133947 s, 3.1 MB/s 00:13:14.310 06:47:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:14.310 06:47:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:13:14.310 06:47:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:14.310 06:47:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:14.310 06:47:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:13:14.310 06:47:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:14.310 06:47:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:13:14.310 06:47:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n1 /dev/nbd1 00:13:14.571 /dev/nbd1 00:13:14.571 06:47:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:13:14.571 06:47:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:13:14.571 06:47:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:13:14.571 06:47:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:13:14.571 06:47:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:14.571 06:47:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:14.571 06:47:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:13:14.571 06:47:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:13:14.571 06:47:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:14.571 06:47:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:14.571 06:47:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:14.571 1+0 records in 00:13:14.571 1+0 records out 00:13:14.571 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00159262 s, 2.6 MB/s 00:13:14.571 06:47:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:14.571 06:47:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:13:14.571 06:47:07 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:14.571 06:47:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:14.571 06:47:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:13:14.571 06:47:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:14.571 06:47:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:13:14.571 06:47:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n1 /dev/nbd10 00:13:14.832 /dev/nbd10 00:13:14.832 06:47:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:13:14.832 06:47:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:13:14.832 06:47:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd10 00:13:14.832 06:47:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:13:14.832 06:47:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:14.832 06:47:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:14.832 06:47:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd10 /proc/partitions 00:13:14.832 06:47:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:13:14.832 06:47:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:14.832 06:47:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:14.832 06:47:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:14.832 1+0 records in 00:13:14.832 1+0 records out 00:13:14.832 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000906215 s, 4.5 MB/s 00:13:14.832 06:47:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:14.832 06:47:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:13:14.832 06:47:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:14.832 06:47:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:14.832 06:47:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:13:14.832 06:47:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:14.832 06:47:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:13:14.832 06:47:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n2 /dev/nbd11 00:13:15.093 /dev/nbd11 00:13:15.093 06:47:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:13:15.093 06:47:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:13:15.093 06:47:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd11 00:13:15.093 06:47:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:13:15.093 06:47:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:15.093 06:47:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:15.093 06:47:08 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd11 /proc/partitions 00:13:15.093 06:47:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:13:15.093 06:47:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:15.093 06:47:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:15.093 06:47:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:15.093 1+0 records in 00:13:15.093 1+0 records out 00:13:15.093 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000943998 s, 4.3 MB/s 00:13:15.093 06:47:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:15.093 06:47:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:13:15.093 06:47:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:15.093 06:47:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:15.093 06:47:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:13:15.093 06:47:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:15.093 06:47:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:13:15.093 06:47:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n3 /dev/nbd12 00:13:15.355 /dev/nbd12 00:13:15.355 06:47:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:13:15.355 06:47:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:13:15.355 06:47:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd12 00:13:15.355 06:47:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:13:15.355 06:47:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:15.355 06:47:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:15.355 06:47:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd12 /proc/partitions 00:13:15.355 06:47:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:13:15.355 06:47:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:15.355 06:47:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:15.355 06:47:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:15.355 1+0 records in 00:13:15.355 1+0 records out 00:13:15.355 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00108559 s, 3.8 MB/s 00:13:15.355 06:47:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:15.355 06:47:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:13:15.355 06:47:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:15.355 06:47:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:15.355 06:47:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:13:15.355 06:47:08 blockdev_xnvme.bdev_nbd -- 
bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:15.355 06:47:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:13:15.355 06:47:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme3n1 /dev/nbd13 00:13:15.617 /dev/nbd13 00:13:15.617 06:47:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:13:15.617 06:47:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:13:15.617 06:47:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd13 00:13:15.617 06:47:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:13:15.617 06:47:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:15.617 06:47:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:15.617 06:47:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd13 /proc/partitions 00:13:15.617 06:47:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:13:15.617 06:47:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:15.617 06:47:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:15.617 06:47:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:15.617 1+0 records in 00:13:15.617 1+0 records out 00:13:15.617 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00110935 s, 3.7 MB/s 00:13:15.617 06:47:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:15.617 06:47:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:13:15.617 06:47:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:15.617 06:47:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:15.617 06:47:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:13:15.617 06:47:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:15.617 06:47:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:13:15.617 06:47:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:13:15.617 06:47:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:15.617 06:47:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:13:15.879 06:47:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:13:15.879 { 00:13:15.879 "nbd_device": "/dev/nbd0", 00:13:15.879 "bdev_name": "nvme0n1" 00:13:15.879 }, 00:13:15.879 { 00:13:15.879 "nbd_device": "/dev/nbd1", 00:13:15.879 "bdev_name": "nvme1n1" 00:13:15.879 }, 00:13:15.879 { 00:13:15.879 "nbd_device": "/dev/nbd10", 00:13:15.879 "bdev_name": "nvme2n1" 00:13:15.879 }, 00:13:15.879 { 00:13:15.879 "nbd_device": "/dev/nbd11", 00:13:15.879 "bdev_name": "nvme2n2" 00:13:15.879 }, 00:13:15.879 { 00:13:15.879 "nbd_device": "/dev/nbd12", 00:13:15.879 "bdev_name": "nvme2n3" 00:13:15.879 }, 00:13:15.879 { 00:13:15.879 "nbd_device": "/dev/nbd13", 00:13:15.879 "bdev_name": "nvme3n1" 00:13:15.879 } 00:13:15.879 ]' 00:13:15.879 06:47:08 blockdev_xnvme.bdev_nbd 
-- bdev/nbd_common.sh@64 -- # echo '[ 00:13:15.879 { 00:13:15.879 "nbd_device": "/dev/nbd0", 00:13:15.879 "bdev_name": "nvme0n1" 00:13:15.879 }, 00:13:15.879 { 00:13:15.879 "nbd_device": "/dev/nbd1", 00:13:15.879 "bdev_name": "nvme1n1" 00:13:15.879 }, 00:13:15.879 { 00:13:15.879 "nbd_device": "/dev/nbd10", 00:13:15.879 "bdev_name": "nvme2n1" 00:13:15.879 }, 00:13:15.879 { 00:13:15.879 "nbd_device": "/dev/nbd11", 00:13:15.879 "bdev_name": "nvme2n2" 00:13:15.879 }, 00:13:15.879 { 00:13:15.879 "nbd_device": "/dev/nbd12", 00:13:15.879 "bdev_name": "nvme2n3" 00:13:15.879 }, 00:13:15.879 { 00:13:15.879 "nbd_device": "/dev/nbd13", 00:13:15.879 "bdev_name": "nvme3n1" 00:13:15.879 } 00:13:15.879 ]' 00:13:15.879 06:47:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:13:15.879 06:47:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:13:15.879 /dev/nbd1 00:13:15.879 /dev/nbd10 00:13:15.879 /dev/nbd11 00:13:15.879 /dev/nbd12 00:13:15.879 /dev/nbd13' 00:13:15.879 06:47:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:13:15.879 /dev/nbd1 00:13:15.879 /dev/nbd10 00:13:15.879 /dev/nbd11 00:13:15.879 /dev/nbd12 00:13:15.879 /dev/nbd13' 00:13:15.879 06:47:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:13:15.879 06:47:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=6 00:13:15.879 06:47:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 6 00:13:15.879 06:47:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=6 00:13:15.879 06:47:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 6 -ne 6 ']' 00:13:15.879 06:47:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' write 00:13:15.879 06:47:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:13:15.879 06:47:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:13:15.879 06:47:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:13:15.879 06:47:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:13:15.879 06:47:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:13:15.879 06:47:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:13:15.879 256+0 records in 00:13:15.879 256+0 records out 00:13:15.879 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00662008 s, 158 MB/s 00:13:15.879 06:47:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:13:15.880 06:47:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:13:16.141 256+0 records in 00:13:16.141 256+0 records out 00:13:16.141 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.204738 s, 5.1 MB/s 00:13:16.141 06:47:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:13:16.141 06:47:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:13:16.401 256+0 records in 00:13:16.401 256+0 records out 00:13:16.401 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.283998 s, 
3.7 MB/s 00:13:16.401 06:47:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:13:16.401 06:47:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:13:16.662 256+0 records in 00:13:16.662 256+0 records out 00:13:16.662 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.236714 s, 4.4 MB/s 00:13:16.662 06:47:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:13:16.662 06:47:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:13:16.924 256+0 records in 00:13:16.924 256+0 records out 00:13:16.924 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.20112 s, 5.2 MB/s 00:13:16.924 06:47:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:13:16.924 06:47:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:13:17.185 256+0 records in 00:13:17.185 256+0 records out 00:13:17.185 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.243191 s, 4.3 MB/s 00:13:17.185 06:47:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:13:17.185 06:47:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:13:17.447 256+0 records in 00:13:17.447 256+0 records out 00:13:17.447 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.243047 s, 4.3 MB/s 00:13:17.447 06:47:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' verify 00:13:17.447 06:47:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:13:17.447 06:47:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:13:17.447 06:47:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:13:17.447 06:47:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:13:17.447 06:47:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:13:17.447 06:47:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:13:17.447 06:47:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:13:17.447 06:47:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:13:17.447 06:47:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:13:17.447 06:47:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:13:17.447 06:47:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:13:17.447 06:47:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:13:17.447 06:47:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:13:17.447 06:47:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:13:17.447 
06:47:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:13:17.447 06:47:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:13:17.447 06:47:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:13:17.447 06:47:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:13:17.447 06:47:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:13:17.447 06:47:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:13:17.447 06:47:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:17.447 06:47:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:13:17.447 06:47:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:17.447 06:47:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:13:17.447 06:47:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:17.447 06:47:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:13:17.709 06:47:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:17.709 06:47:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:17.709 06:47:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:17.709 06:47:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:17.709 06:47:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:17.709 06:47:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:17.709 06:47:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:17.709 06:47:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:17.709 06:47:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:17.709 06:47:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:13:17.971 06:47:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:13:17.971 06:47:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:13:17.971 06:47:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:13:17.971 06:47:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:17.971 06:47:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:17.971 06:47:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:13:17.971 06:47:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:17.971 06:47:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:17.971 06:47:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:17.971 06:47:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk 
/dev/nbd10 00:13:17.971 06:47:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:13:17.971 06:47:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:13:17.971 06:47:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:13:17.971 06:47:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:17.971 06:47:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:17.971 06:47:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:13:17.971 06:47:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:17.971 06:47:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:17.971 06:47:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:17.971 06:47:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:13:18.232 06:47:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:13:18.232 06:47:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:13:18.232 06:47:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:13:18.232 06:47:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:18.232 06:47:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:18.232 06:47:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:13:18.232 06:47:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:18.232 06:47:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:18.232 06:47:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:18.233 06:47:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:13:18.497 06:47:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:13:18.497 06:47:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:13:18.497 06:47:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:13:18.497 06:47:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:18.498 06:47:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:18.498 06:47:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:13:18.498 06:47:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:18.498 06:47:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:18.498 06:47:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:18.498 06:47:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:13:18.809 06:47:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:13:18.809 06:47:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:13:18.809 06:47:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:13:18.809 06:47:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:18.809 06:47:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:18.809 
06:47:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:13:18.809 06:47:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:18.809 06:47:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:18.809 06:47:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:13:18.809 06:47:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:18.809 06:47:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:13:19.099 06:47:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:13:19.099 06:47:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:13:19.099 06:47:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:13:19.099 06:47:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:13:19.099 06:47:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:13:19.099 06:47:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:13:19.099 06:47:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:13:19.099 06:47:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:13:19.099 06:47:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:13:19.099 06:47:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:13:19.099 06:47:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:13:19.099 06:47:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:13:19.099 06:47:11 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:13:19.099 06:47:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:19.099 06:47:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:13:19.100 06:47:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:13:19.361 malloc_lvol_verify 00:13:19.361 06:47:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:13:19.361 87c0817b-532d-4e43-98c5-90e6b519d14c 00:13:19.361 06:47:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:13:19.621 d6a047f8-3f91-4ee4-9f5a-c855b1ad4c71 00:13:19.621 06:47:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:13:19.882 /dev/nbd0 00:13:19.882 06:47:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:13:19.882 06:47:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:13:19.882 06:47:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:13:19.882 06:47:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:13:19.882 06:47:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:13:19.882 mke2fs 1.47.0 (5-Feb-2023) 00:13:19.882 Discarding device blocks: 0/4096 
done 00:13:19.882 Creating filesystem with 4096 1k blocks and 1024 inodes 00:13:19.882 00:13:19.882 Allocating group tables: 0/1 done 00:13:19.882 Writing inode tables: 0/1 done 00:13:19.882 Creating journal (1024 blocks): done 00:13:19.882 Writing superblocks and filesystem accounting information: 0/1 done 00:13:19.882 00:13:19.882 06:47:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:13:19.882 06:47:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:19.882 06:47:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:13:19.882 06:47:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:19.882 06:47:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:13:19.882 06:47:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:19.882 06:47:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:13:20.142 06:47:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:20.142 06:47:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:20.142 06:47:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:20.142 06:47:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:20.142 06:47:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:20.142 06:47:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:20.142 06:47:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:20.142 06:47:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:20.142 06:47:13 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 81241 00:13:20.142 06:47:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@954 -- # '[' -z 81241 ']' 00:13:20.142 06:47:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@958 -- # kill -0 81241 00:13:20.142 06:47:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@959 -- # uname 00:13:20.142 06:47:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:20.142 06:47:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 81241 00:13:20.142 killing process with pid 81241 00:13:20.142 06:47:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:20.142 06:47:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:20.142 06:47:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 81241' 00:13:20.142 06:47:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@973 -- # kill 81241 00:13:20.142 06:47:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@978 -- # wait 81241 00:13:20.405 ************************************ 00:13:20.405 END TEST bdev_nbd 00:13:20.405 ************************************ 00:13:20.405 06:47:13 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:13:20.405 00:13:20.405 real 0m10.395s 00:13:20.405 user 0m14.145s 00:13:20.405 sys 0m3.708s 00:13:20.405 06:47:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:20.405 06:47:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 
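[editor's note] Every nbd_start_disk/nbd_stop_disk step traced above reduces to one polling idiom: ask the kernel, via /proc/partitions, whether the nbd device is registered, retry up to 20 times, and on the startup path prove the device is usable with a single 4 KiB O_DIRECT read. A minimal standalone sketch of that idiom follows. It is an illustration, not the real SPDK code: the helper names wait_nbd_gone/wait_nbd_up and the /tmp/nbdtest scratch path are hypothetical stand-ins for waitfornbd_exit/waitfornbd in nbd_common.sh and autotest_common.sh, and the sleep between retries is assumed (the excerpt only shows iterations that succeed immediately).

  # Teardown path: poll until the nbd device disappears from the
  # kernel's partition table, mirroring the grep -q -w loop the trace
  # runs for nbd0..nbd13 after each nbd_stop_disk RPC.
  wait_nbd_gone() {
      local nbd_name=$1 i
      for ((i = 1; i <= 20; i++)); do
          grep -q -w "$nbd_name" /proc/partitions || return 0
          sleep 0.1
      done
      return 1   # still registered after 20 tries
  }

  # Startup path: wait for the device to appear, then do the same
  # one-block direct read and non-zero size check the trace performs
  # (dd ... bs=4096 count=1 iflag=direct; stat -c %s; '[' size '!=' 0 ']').
  wait_nbd_up() {
      local nbd_name=$1 i
      for ((i = 1; i <= 20; i++)); do
          grep -q -w "$nbd_name" /proc/partitions && break
          sleep 0.1
      done
      dd if="/dev/$nbd_name" of=/tmp/nbdtest bs=4096 count=1 iflag=direct &&
          [ "$(stat -c %s /tmp/nbdtest)" -ne 0 ]
  }

Usage would be e.g. `wait_nbd_up nbd0 && echo "nbd0 ready"`; the same grep -q -w match is word-anchored so nbd1 does not falsely match nbd10..nbd13.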
00:13:20.405 06:47:13 blockdev_xnvme -- bdev/blockdev.sh@762 -- # [[ y == y ]] 00:13:20.405 06:47:13 blockdev_xnvme -- bdev/blockdev.sh@763 -- # '[' xnvme = nvme ']' 00:13:20.405 06:47:13 blockdev_xnvme -- bdev/blockdev.sh@763 -- # '[' xnvme = gpt ']' 00:13:20.405 06:47:13 blockdev_xnvme -- bdev/blockdev.sh@767 -- # run_test bdev_fio fio_test_suite '' 00:13:20.405 06:47:13 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:13:20.405 06:47:13 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:20.405 06:47:13 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:20.405 ************************************ 00:13:20.405 START TEST bdev_fio 00:13:20.405 ************************************ 00:13:20.405 06:47:13 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1129 -- # fio_test_suite '' 00:13:20.405 /home/vagrant/spdk_repo/spdk/test/bdev /home/vagrant/spdk_repo/spdk 00:13:20.405 06:47:13 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@330 -- # local env_context 00:13:20.405 06:47:13 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@334 -- # pushd /home/vagrant/spdk_repo/spdk/test/bdev 00:13:20.405 06:47:13 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@335 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:13:20.405 06:47:13 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # echo '' 00:13:20.405 06:47:13 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # sed s/--env-context=// 00:13:20.405 06:47:13 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # env_context= 00:13:20.405 06:47:13 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@339 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio verify AIO '' 00:13:20.405 06:47:13 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1284 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:13:20.405 06:47:13 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1285 -- # local workload=verify 00:13:20.405 06:47:13 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1286 -- # local bdev_type=AIO 00:13:20.405 06:47:13 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1287 -- # local env_context= 00:13:20.405 06:47:13 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1288 -- # local fio_dir=/usr/src/fio 00:13:20.405 06:47:13 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1290 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:13:20.405 06:47:13 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -z verify ']' 00:13:20.405 06:47:13 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1299 -- # '[' -n '' ']' 00:13:20.405 06:47:13 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1303 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:13:20.405 06:47:13 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1305 -- # cat 00:13:20.405 06:47:13 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1317 -- # '[' verify == verify ']' 00:13:20.405 06:47:13 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1318 -- # cat 00:13:20.405 06:47:13 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1327 -- # '[' AIO == AIO ']' 00:13:20.405 06:47:13 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1328 -- # /usr/src/fio/fio --version 00:13:20.405 06:47:13 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1328 -- # [[ fio-3.35 == *\f\i\o\-\3* ]] 00:13:20.405 06:47:13 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1329 -- # echo serialize_overlap=1 00:13:20.405 06:47:13 blockdev_xnvme.bdev_fio -- 
bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:13:20.405 06:47:13 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme0n1]' 00:13:20.405 06:47:13 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme0n1 00:13:20.405 06:47:13 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:13:20.405 06:47:13 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme1n1]' 00:13:20.405 06:47:13 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme1n1 00:13:20.405 06:47:13 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:13:20.405 06:47:13 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme2n1]' 00:13:20.405 06:47:13 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme2n1 00:13:20.405 06:47:13 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:13:20.405 06:47:13 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme2n2]' 00:13:20.405 06:47:13 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme2n2 00:13:20.405 06:47:13 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:13:20.405 06:47:13 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme2n3]' 00:13:20.405 06:47:13 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme2n3 00:13:20.405 06:47:13 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:13:20.405 06:47:13 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme3n1]' 00:13:20.405 06:47:13 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme3n1 00:13:20.405 06:47:13 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@346 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json' 00:13:20.405 06:47:13 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@348 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:13:20.406 06:47:13 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1105 -- # '[' 11 -le 1 ']' 00:13:20.406 06:47:13 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:20.406 06:47:13 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:13:20.406 ************************************ 00:13:20.406 START TEST bdev_fio_rw_verify 00:13:20.406 ************************************ 00:13:20.406 06:47:13 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1129 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:13:20.406 06:47:13 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 
--aux-path=/home/vagrant/spdk_repo/spdk/../output 00:13:20.406 06:47:13 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:13:20.406 06:47:13 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:13:20.406 06:47:13 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # local sanitizers 00:13:20.406 06:47:13 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:20.406 06:47:13 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # shift 00:13:20.406 06:47:13 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1347 -- # local asan_lib= 00:13:20.406 06:47:13 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:13:20.667 06:47:13 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:20.667 06:47:13 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # grep libasan 00:13:20.667 06:47:13 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:13:20.667 06:47:13 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:13:20.667 06:47:13 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:13:20.667 06:47:13 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1351 -- # break 00:13:20.667 06:47:13 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:13:20.667 06:47:13 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:13:20.667 job_nvme0n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:20.667 job_nvme1n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:20.667 job_nvme2n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:20.667 job_nvme2n2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:20.668 job_nvme2n3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:20.668 job_nvme3n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:20.668 fio-3.35 00:13:20.668 Starting 6 threads 00:13:32.909 00:13:32.909 job_nvme0n1: (groupid=0, jobs=6): err= 0: pid=81641: Mon Nov 18 06:47:24 2024 00:13:32.909 read: IOPS=13.4k, BW=52.3MiB/s (54.8MB/s)(523MiB/10002msec) 00:13:32.909 slat (usec): min=2, max=1650, avg= 6.47, stdev=13.67 00:13:32.909 clat (usec): min=76, max=10140, avg=1504.50, stdev=829.24 00:13:32.909 lat (usec): min=79, max=10153, avg=1510.96, stdev=829.90 
00:13:32.909 clat percentiles (usec): 00:13:32.909 | 50.000th=[ 1401], 99.000th=[ 4015], 99.900th=[ 5538], 99.990th=[ 7504], 00:13:32.909 | 99.999th=[10159] 00:13:32.909 write: IOPS=13.7k, BW=53.7MiB/s (56.3MB/s)(537MiB/10002msec); 0 zone resets 00:13:32.909 slat (usec): min=12, max=7753, avg=40.65, stdev=144.81 00:13:32.909 clat (usec): min=76, max=8814, avg=1698.40, stdev=897.94 00:13:32.909 lat (usec): min=89, max=9641, avg=1739.06, stdev=911.45 00:13:32.909 clat percentiles (usec): 00:13:32.909 | 50.000th=[ 1582], 99.000th=[ 4424], 99.900th=[ 5932], 99.990th=[ 8455], 00:13:32.909 | 99.999th=[ 8848] 00:13:32.909 bw ( KiB/s): min=46030, max=106715, per=100.00%, avg=55247.68, stdev=2475.78, samples=114 00:13:32.909 iops : min=11506, max=26677, avg=13811.11, stdev=618.91, samples=114 00:13:32.909 lat (usec) : 100=0.01%, 250=1.91%, 500=6.08%, 750=7.87%, 1000=9.41% 00:13:32.909 lat (msec) : 2=47.15%, 4=26.12%, 10=1.44%, 20=0.01% 00:13:32.909 cpu : usr=46.19%, sys=30.85%, ctx=4832, majf=0, minf=14042 00:13:32.909 IO depths : 1=11.6%, 2=24.0%, 4=51.0%, 8=13.4%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:32.909 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:32.909 complete : 0=0.0%, 4=89.1%, 8=10.9%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:32.909 issued rwts: total=133831,137403,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:32.909 latency : target=0, window=0, percentile=100.00%, depth=8 00:13:32.909 00:13:32.909 Run status group 0 (all jobs): 00:13:32.909 READ: bw=52.3MiB/s (54.8MB/s), 52.3MiB/s-52.3MiB/s (54.8MB/s-54.8MB/s), io=523MiB (548MB), run=10002-10002msec 00:13:32.909 WRITE: bw=53.7MiB/s (56.3MB/s), 53.7MiB/s-53.7MiB/s (56.3MB/s-56.3MB/s), io=537MiB (563MB), run=10002-10002msec 00:13:32.909 ----------------------------------------------------- 00:13:32.909 Suppressions used: 00:13:32.909 count bytes template 00:13:32.909 6 48 /usr/src/fio/parse.c 00:13:32.909 3483 334368 /usr/src/fio/iolog.c 00:13:32.909 1 8 libtcmalloc_minimal.so 00:13:32.909 1 904 libcrypto.so 00:13:32.909 ----------------------------------------------------- 00:13:32.909 00:13:32.909 00:13:32.909 real 0m11.236s 00:13:32.909 user 0m28.467s 00:13:32.909 sys 0m18.867s 00:13:32.909 06:47:24 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:32.909 ************************************ 00:13:32.909 END TEST bdev_fio_rw_verify 00:13:32.909 ************************************ 00:13:32.909 06:47:24 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@10 -- # set +x 00:13:32.909 06:47:24 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@349 -- # rm -f 00:13:32.909 06:47:24 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@350 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:13:32.909 06:47:24 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@353 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio trim '' '' 00:13:32.909 06:47:24 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1284 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:13:32.909 06:47:24 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1285 -- # local workload=trim 00:13:32.909 06:47:24 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1286 -- # local bdev_type= 00:13:32.909 06:47:24 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1287 -- # local env_context= 00:13:32.909 06:47:24 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1288 -- # local fio_dir=/usr/src/fio 00:13:32.909 06:47:24 blockdev_xnvme.bdev_fio -- 
common/autotest_common.sh@1290 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:13:32.909 06:47:24 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -z trim ']' 00:13:32.909 06:47:24 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1299 -- # '[' -n '' ']' 00:13:32.909 06:47:24 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1303 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:13:32.909 06:47:24 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1305 -- # cat 00:13:32.909 06:47:24 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1317 -- # '[' trim == verify ']' 00:13:32.909 06:47:24 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1332 -- # '[' trim == trim ']' 00:13:32.909 06:47:24 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1333 -- # echo rw=trimwrite 00:13:32.909 06:47:24 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:13:32.910 06:47:24 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # printf '%s\n' '{' ' "name": "nvme0n1",' ' "aliases": [' ' "f1b775f2-9521-4b7d-b194-fd51c5316f05"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "f1b775f2-9521-4b7d-b194-fd51c5316f05",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n1",' ' "aliases": [' ' "8075aa0f-9748-49b3-ba7d-4ed15a4566f1"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "8075aa0f-9748-49b3-ba7d-4ed15a4566f1",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n1",' ' "aliases": [' ' "0d7de33b-bf06-4bcd-a098-5f457ba332c5"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "0d7de33b-bf06-4bcd-a098-5f457ba332c5",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' 
"zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n2",' ' "aliases": [' ' "ee40d08c-febd-4542-8419-c3b52761d6a5"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "ee40d08c-febd-4542-8419-c3b52761d6a5",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n3",' ' "aliases": [' ' "860a173d-9454-4908-9d71-8a9cce65b0be"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "860a173d-9454-4908-9d71-8a9cce65b0be",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme3n1",' ' "aliases": [' ' "b874a3d9-cab7-4cae-b582-c442459f3969"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "b874a3d9-cab7-4cae-b582-c442459f3969",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' 00:13:32.910 06:47:24 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # [[ -n '' ]] 00:13:32.910 06:47:24 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@360 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:13:32.910 /home/vagrant/spdk_repo/spdk 00:13:32.910 06:47:24 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@361 -- # popd 00:13:32.910 06:47:24 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@362 -- # trap - SIGINT SIGTERM EXIT 00:13:32.910 06:47:24 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@363 -- # return 0 00:13:32.910 00:13:32.910 real 0m11.418s 00:13:32.910 user 
0m28.549s 00:13:32.910 sys 0m18.944s 00:13:32.910 ************************************ 00:13:32.910 END TEST bdev_fio 00:13:32.910 ************************************ 00:13:32.910 06:47:24 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:32.910 06:47:24 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:13:32.910 06:47:24 blockdev_xnvme -- bdev/blockdev.sh@774 -- # trap cleanup SIGINT SIGTERM EXIT 00:13:32.910 06:47:24 blockdev_xnvme -- bdev/blockdev.sh@776 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:13:32.910 06:47:24 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:13:32.910 06:47:24 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:32.910 06:47:24 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:32.910 ************************************ 00:13:32.910 START TEST bdev_verify 00:13:32.910 ************************************ 00:13:32.910 06:47:24 blockdev_xnvme.bdev_verify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:13:32.910 [2024-11-18 06:47:24.966056] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 00:13:32.910 [2024-11-18 06:47:24.966212] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81806 ] 00:13:32.910 [2024-11-18 06:47:25.129101] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:13:32.910 [2024-11-18 06:47:25.161723] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:32.910 [2024-11-18 06:47:25.161776] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:32.910 Running I/O for 5 seconds... 
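[editor's note] While the bdevperf verify run above collects its five seconds of samples (reported next), here is how the fio suite that just finished assembles its job file. Only the pieces echoed in the trace are reproduced: serialize_overlap=1 from the fio-version check and one [job_<bdev>]/filename=<bdev> section per bdev; the [global] verify settings written by fio_config_gen are not shown in this excerpt and are omitted, and SPDK= is shorthand for the repo root (/home/vagrant/spdk_repo/spdk in this run), so treat the sketch as a reconstruction rather than the authoritative script.

  # Append the knobs the trace echoes into bdev.fio.
  {
      echo 'serialize_overlap=1'   # added when fio >= 3 drives AIO-style bdevs
      for b in nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1; do
          printf '[job_%s]\nfilename=%s\n' "$b" "$b"
      done
  } >> "$SPDK/test/bdev/bdev.fio"

  # Run line reassembled from the traced flags: the spdk_bdev fio plugin
  # is preloaded next to libasan (the LD_PRELOAD the asan detection loop
  # builds), and the bdev layer is configured from bdev.json.
  LD_PRELOAD="/usr/lib64/libasan.so.8 $SPDK/build/fio/spdk_bdev" \
      /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 \
      "$SPDK/test/bdev/bdev.fio" --verify_state_save=0 \
      --spdk_json_conf="$SPDK/test/bdev/bdev.json" --spdk_mem=0 \
      --aux-path="$SPDK/../output"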
00:13:34.796 25600.00 IOPS, 100.00 MiB/s [2024-11-18T06:47:28.825Z] 23760.00 IOPS, 92.81 MiB/s [2024-11-18T06:47:29.771Z] 23584.00 IOPS, 92.13 MiB/s [2024-11-18T06:47:30.715Z] 23584.00 IOPS, 92.12 MiB/s [2024-11-18T06:47:30.715Z] 23385.60 IOPS, 91.35 MiB/s 00:13:37.628 Latency(us) 00:13:37.628 [2024-11-18T06:47:30.715Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:37.628 Job: nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:37.628 Verification LBA range: start 0x0 length 0xa0000 00:13:37.628 nvme0n1 : 5.05 1800.95 7.03 0.00 0.00 70881.22 13510.50 69770.63 00:13:37.628 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:37.628 Verification LBA range: start 0xa0000 length 0xa0000 00:13:37.628 nvme0n1 : 5.06 1870.66 7.31 0.00 0.00 68301.01 8570.09 68560.74 00:13:37.628 Job: nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:37.628 Verification LBA range: start 0x0 length 0xbd0bd 00:13:37.628 nvme1n1 : 5.05 2206.34 8.62 0.00 0.00 57502.11 8015.56 56058.49 00:13:37.628 Job: nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:37.628 Verification LBA range: start 0xbd0bd length 0xbd0bd 00:13:37.628 nvme1n1 : 5.07 2349.13 9.18 0.00 0.00 54119.47 6402.36 69770.63 00:13:37.628 Job: nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:37.628 Verification LBA range: start 0x0 length 0x80000 00:13:37.628 nvme2n1 : 5.06 1870.96 7.31 0.00 0.00 67725.05 7864.32 67350.84 00:13:37.628 Job: nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:37.628 Verification LBA range: start 0x80000 length 0x80000 00:13:37.628 nvme2n1 : 5.06 1923.11 7.51 0.00 0.00 66206.18 9175.04 70980.53 00:13:37.628 Job: nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:37.628 Verification LBA range: start 0x0 length 0x80000 00:13:37.628 nvme2n2 : 5.06 1845.11 7.21 0.00 0.00 68603.08 6805.66 66140.95 00:13:37.628 Job: nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:37.628 Verification LBA range: start 0x80000 length 0x80000 00:13:37.628 nvme2n2 : 5.08 1891.25 7.39 0.00 0.00 67211.35 6856.07 70173.93 00:13:37.628 Job: nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:37.628 Verification LBA range: start 0x0 length 0x80000 00:13:37.628 nvme2n3 : 5.07 1816.30 7.09 0.00 0.00 69647.55 7208.96 68157.44 00:13:37.628 Job: nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:37.628 Verification LBA range: start 0x80000 length 0x80000 00:13:37.628 nvme2n3 : 5.07 1867.75 7.30 0.00 0.00 68005.11 8570.09 70173.93 00:13:37.628 Job: nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:37.628 Verification LBA range: start 0x0 length 0x20000 00:13:37.628 nvme3n1 : 5.08 1839.78 7.19 0.00 0.00 68624.83 3138.17 68157.44 00:13:37.628 Job: nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:37.628 Verification LBA range: start 0x20000 length 0x20000 00:13:37.628 nvme3n1 : 5.08 1865.51 7.29 0.00 0.00 67963.64 5570.56 65334.35 00:13:37.628 [2024-11-18T06:47:30.715Z] =================================================================================================================== 00:13:37.628 [2024-11-18T06:47:30.715Z] Total : 23146.85 90.42 0.00 0.00 65830.52 3138.17 70980.53 00:13:37.890 00:13:37.890 real 0m5.848s 00:13:37.890 user 0m9.406s 00:13:37.890 sys 0m1.349s 00:13:37.890 06:47:30 blockdev_xnvme.bdev_verify -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:13:37.890 06:47:30 blockdev_xnvme.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:13:37.890 ************************************ 00:13:37.890 END TEST bdev_verify 00:13:37.890 ************************************ 00:13:37.890 06:47:30 blockdev_xnvme -- bdev/blockdev.sh@777 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:13:37.890 06:47:30 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:13:37.890 06:47:30 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:37.890 06:47:30 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:37.890 ************************************ 00:13:37.890 START TEST bdev_verify_big_io 00:13:37.890 ************************************ 00:13:37.890 06:47:30 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:13:37.890 [2024-11-18 06:47:30.885485] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 00:13:37.890 [2024-11-18 06:47:30.885638] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81898 ] 00:13:38.152 [2024-11-18 06:47:31.048516] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:13:38.152 [2024-11-18 06:47:31.078904] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:38.152 [2024-11-18 06:47:31.078953] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:38.412 Running I/O for 5 seconds... 
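A quick cross-check ties the IOPS and MiB/s columns of these result tables together: throughput = IOPS x I/O size. Using the totals from the 4 KiB verify table above and the 64 KiB big-I/O table that follows:

    # 4 KiB run:  23146.85 IOPS * 4096 B  / 2^20 =  90.42 MiB/s
    # 64 KiB run:  1672.73 IOPS * 65536 B / 2^20 = 104.55 MiB/s
    awk 'BEGIN { printf "%.2f %.2f\n", 23146.85*4096/1048576, 1672.73*65536/1048576 }'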
00:13:44.261 1792.00 IOPS, 112.00 MiB/s [2024-11-18T06:47:37.348Z] 2660.00 IOPS, 166.25 MiB/s [2024-11-18T06:47:37.348Z] 3253.33 IOPS, 203.33 MiB/s 00:13:44.261 Latency(us) 00:13:44.261 [2024-11-18T06:47:37.348Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:44.262 Job: nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:13:44.262 Verification LBA range: start 0x0 length 0xa000 00:13:44.262 nvme0n1 : 5.80 123.01 7.69 0.00 0.00 1003001.01 54848.59 2103604.78 00:13:44.262 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:13:44.262 Verification LBA range: start 0xa000 length 0xa000 00:13:44.262 nvme0n1 : 5.92 138.18 8.64 0.00 0.00 893511.19 26617.70 974369.08 00:13:44.262 Job: nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:13:44.262 Verification LBA range: start 0x0 length 0xbd0b 00:13:44.262 nvme1n1 : 5.89 152.23 9.51 0.00 0.00 784587.34 10687.41 916294.10 00:13:44.262 Job: nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:13:44.262 Verification LBA range: start 0xbd0b length 0xbd0b 00:13:44.262 nvme1n1 : 5.93 126.80 7.93 0.00 0.00 952221.19 45371.08 1264743.98 00:13:44.262 Job: nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:13:44.262 Verification LBA range: start 0x0 length 0x8000 00:13:44.262 nvme2n1 : 5.81 115.71 7.23 0.00 0.00 1011327.31 103244.41 1703532.70 00:13:44.262 Job: nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:13:44.262 Verification LBA range: start 0x8000 length 0x8000 00:13:44.262 nvme2n1 : 5.92 137.74 8.61 0.00 0.00 840853.57 64124.46 1393799.48 00:13:44.262 Job: nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:13:44.262 Verification LBA range: start 0x0 length 0x8000 00:13:44.262 nvme2n2 : 5.90 141.59 8.85 0.00 0.00 802139.87 57268.38 2155226.98 00:13:44.262 Job: nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:13:44.262 Verification LBA range: start 0x8000 length 0x8000 00:13:44.262 nvme2n2 : 5.93 126.72 7.92 0.00 0.00 902637.73 49202.41 1477685.56 00:13:44.262 Job: nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:13:44.262 Verification LBA range: start 0x0 length 0x8000 00:13:44.262 nvme2n3 : 5.90 122.03 7.63 0.00 0.00 911672.98 138734.67 1348630.06 00:13:44.262 Job: nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:13:44.262 Verification LBA range: start 0x8000 length 0x8000 00:13:44.262 nvme2n3 : 5.94 137.45 8.59 0.00 0.00 806754.83 43757.88 1568024.42 00:13:44.262 Job: nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:13:44.262 Verification LBA range: start 0x0 length 0x2000 00:13:44.262 nvme3n1 : 5.90 195.17 12.20 0.00 0.00 558575.23 5797.42 877577.45 00:13:44.262 Job: nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:13:44.262 Verification LBA range: start 0x2000 length 0x2000 00:13:44.262 nvme3n1 : 5.94 156.10 9.76 0.00 0.00 691183.35 3402.83 683994.19 00:13:44.262 [2024-11-18T06:47:37.349Z] =================================================================================================================== 00:13:44.262 [2024-11-18T06:47:37.349Z] Total : 1672.73 104.55 0.00 0.00 828692.90 3402.83 2155226.98 00:13:44.522 00:13:44.522 real 0m6.747s 00:13:44.522 user 0m12.327s 00:13:44.522 sys 0m0.467s 00:13:44.522 06:47:37 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:44.522 
************************************ 00:13:44.522 END TEST bdev_verify_big_io 00:13:44.523 ************************************ 00:13:44.523 06:47:37 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:13:44.783 06:47:37 blockdev_xnvme -- bdev/blockdev.sh@778 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:13:44.783 06:47:37 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:13:44.783 06:47:37 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:44.783 06:47:37 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:44.783 ************************************ 00:13:44.783 START TEST bdev_write_zeroes 00:13:44.783 ************************************ 00:13:44.784 06:47:37 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:13:44.784 [2024-11-18 06:47:37.695078] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 00:13:44.784 [2024-11-18 06:47:37.695220] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82003 ] 00:13:44.784 [2024-11-18 06:47:37.856928] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:45.044 [2024-11-18 06:47:37.885540] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:45.045 Running I/O for 1 seconds... 
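The write_zeroes pass below runs on a single core (-c 0x1 in the EAL line) for one second and exercises the bdev write_zeroes path rather than data writes; the bdev dump earlier in this log shows the xNVMe bdevs advertise "write_zeroes": true. One way to check that flag against a running target before launching such a workload (a hedged illustration; rpc.py path assumed from the repo layout above):

    SPDK=/home/vagrant/spdk_repo/spdk
    "$SPDK/scripts/rpc.py" bdev_get_bdevs -b nvme0n1 \
        | jq '.[0].supported_io_types.write_zeroes'    # expect: true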
00:13:46.429 92896.00 IOPS, 362.88 MiB/s 00:13:46.429 Latency(us) 00:13:46.429 [2024-11-18T06:47:39.516Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:46.429 Job: nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:13:46.429 nvme0n1 : 1.02 15261.00 59.61 0.00 0.00 8377.90 5545.35 27827.59 00:13:46.429 Job: nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:13:46.429 nvme1n1 : 1.02 15711.85 61.37 0.00 0.00 8129.37 6099.89 20164.92 00:13:46.429 Job: nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:13:46.429 nvme2n1 : 1.03 15101.37 58.99 0.00 0.00 8451.79 6301.54 20870.70 00:13:46.429 Job: nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:13:46.429 nvme2n2 : 1.03 15084.20 58.92 0.00 0.00 8408.06 5520.15 20870.70 00:13:46.429 Job: nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:13:46.429 nvme2n3 : 1.03 15067.28 58.86 0.00 0.00 8409.88 5520.15 21173.17 00:13:46.429 Job: nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:13:46.429 nvme3n1 : 1.03 15132.50 59.11 0.00 0.00 8366.40 5368.91 29440.79 00:13:46.429 [2024-11-18T06:47:39.516Z] =================================================================================================================== 00:13:46.429 [2024-11-18T06:47:39.516Z] Total : 91358.19 356.87 0.00 0.00 8355.80 5368.91 29440.79 00:13:46.429 00:13:46.429 real 0m1.739s 00:13:46.429 user 0m1.068s 00:13:46.429 sys 0m0.469s 00:13:46.429 06:47:39 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:46.429 ************************************ 00:13:46.429 END TEST bdev_write_zeroes 00:13:46.429 ************************************ 00:13:46.429 06:47:39 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:13:46.429 06:47:39 blockdev_xnvme -- bdev/blockdev.sh@781 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:13:46.429 06:47:39 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:13:46.429 06:47:39 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:46.429 06:47:39 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:46.429 ************************************ 00:13:46.429 START TEST bdev_json_nonenclosed 00:13:46.429 ************************************ 00:13:46.429 06:47:39 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:13:46.429 [2024-11-18 06:47:39.507178] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 
00:13:46.429 [2024-11-18 06:47:39.507335] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82034 ] 00:13:46.690 [2024-11-18 06:47:39.666675] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:46.690 [2024-11-18 06:47:39.697275] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:46.690 [2024-11-18 06:47:39.697385] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:13:46.690 [2024-11-18 06:47:39.697403] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:13:46.690 [2024-11-18 06:47:39.697417] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:13:46.952 00:13:46.952 real 0m0.340s 00:13:46.952 user 0m0.122s 00:13:46.952 sys 0m0.114s 00:13:46.952 06:47:39 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:46.952 06:47:39 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:13:46.952 ************************************ 00:13:46.952 END TEST bdev_json_nonenclosed 00:13:46.952 ************************************ 00:13:46.952 06:47:39 blockdev_xnvme -- bdev/blockdev.sh@784 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:13:46.952 06:47:39 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:13:46.952 06:47:39 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:46.952 06:47:39 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:46.952 ************************************ 00:13:46.952 START TEST bdev_json_nonarray 00:13:46.952 ************************************ 00:13:46.952 06:47:39 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:13:46.952 [2024-11-18 06:47:39.911668] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 00:13:46.952 [2024-11-18 06:47:39.911817] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82065 ] 00:13:47.214 [2024-11-18 06:47:40.072329] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:47.214 [2024-11-18 06:47:40.104596] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:47.214 [2024-11-18 06:47:40.104722] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
00:13:47.214 [2024-11-18 06:47:40.104740] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:13:47.214 [2024-11-18 06:47:40.104753] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:13:47.214 00:13:47.214 real 0m0.339s 00:13:47.214 user 0m0.129s 00:13:47.214 sys 0m0.105s 00:13:47.214 ************************************ 00:13:47.214 END TEST bdev_json_nonarray 00:13:47.214 ************************************ 00:13:47.214 06:47:40 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:47.214 06:47:40 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:13:47.214 06:47:40 blockdev_xnvme -- bdev/blockdev.sh@786 -- # [[ xnvme == bdev ]] 00:13:47.214 06:47:40 blockdev_xnvme -- bdev/blockdev.sh@793 -- # [[ xnvme == gpt ]] 00:13:47.214 06:47:40 blockdev_xnvme -- bdev/blockdev.sh@797 -- # [[ xnvme == crypto_sw ]] 00:13:47.214 06:47:40 blockdev_xnvme -- bdev/blockdev.sh@809 -- # trap - SIGINT SIGTERM EXIT 00:13:47.214 06:47:40 blockdev_xnvme -- bdev/blockdev.sh@810 -- # cleanup 00:13:47.214 06:47:40 blockdev_xnvme -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:13:47.214 06:47:40 blockdev_xnvme -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:13:47.214 06:47:40 blockdev_xnvme -- bdev/blockdev.sh@26 -- # [[ xnvme == rbd ]] 00:13:47.214 06:47:40 blockdev_xnvme -- bdev/blockdev.sh@30 -- # [[ xnvme == daos ]] 00:13:47.214 06:47:40 blockdev_xnvme -- bdev/blockdev.sh@34 -- # [[ xnvme = \g\p\t ]] 00:13:47.214 06:47:40 blockdev_xnvme -- bdev/blockdev.sh@40 -- # [[ xnvme == xnvme ]] 00:13:47.214 06:47:40 blockdev_xnvme -- bdev/blockdev.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:13:47.788 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:13:49.704 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:13:50.277 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:13:50.277 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:13:50.277 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:13:50.277 00:13:50.277 real 0m49.408s 00:13:50.277 user 1m17.284s 00:13:50.277 sys 0m31.411s 00:13:50.277 ************************************ 00:13:50.277 END TEST blockdev_xnvme 00:13:50.277 ************************************ 00:13:50.277 06:47:43 blockdev_xnvme -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:50.277 06:47:43 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:50.539 06:47:43 -- spdk/autotest.sh@247 -- # run_test ublk /home/vagrant/spdk_repo/spdk/test/ublk/ublk.sh 00:13:50.539 06:47:43 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:13:50.539 06:47:43 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:50.539 06:47:43 -- common/autotest_common.sh@10 -- # set +x 00:13:50.539 ************************************ 00:13:50.539 START TEST ublk 00:13:50.539 ************************************ 00:13:50.539 06:47:43 ublk -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ublk/ublk.sh 00:13:50.539 * Looking for test storage... 
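The two negative tests that finish above (bdev_json_nonenclosed and bdev_json_nonarray) feed bdevperf deliberately malformed configs and expect exactly the json_config.c errors quoted in the log. The actual nonenclosed.json and nonarray.json bodies are not shown here; the shapes below are assumptions, for illustration only, of inputs that would trigger those messages:

    # "not enclosed in {}" -- a bare fragment with no top-level object:
    #     "subsystems": []
    # "'subsystems' should be an array" -- enclosed, but the wrong type:
    #     { "subsystems": {} }
    # a well-formed config, for contrast (as dumped later in this log):
    #     { "subsystems": [ { "subsystem": "...", "config": [ ... ] } ] }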
00:13:50.539 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ublk 00:13:50.539 06:47:43 ublk -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:13:50.539 06:47:43 ublk -- common/autotest_common.sh@1693 -- # lcov --version 00:13:50.539 06:47:43 ublk -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:13:50.539 06:47:43 ublk -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:13:50.539 06:47:43 ublk -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:50.539 06:47:43 ublk -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:50.539 06:47:43 ublk -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:50.539 06:47:43 ublk -- scripts/common.sh@336 -- # IFS=.-: 00:13:50.539 06:47:43 ublk -- scripts/common.sh@336 -- # read -ra ver1 00:13:50.539 06:47:43 ublk -- scripts/common.sh@337 -- # IFS=.-: 00:13:50.539 06:47:43 ublk -- scripts/common.sh@337 -- # read -ra ver2 00:13:50.539 06:47:43 ublk -- scripts/common.sh@338 -- # local 'op=<' 00:13:50.539 06:47:43 ublk -- scripts/common.sh@340 -- # ver1_l=2 00:13:50.539 06:47:43 ublk -- scripts/common.sh@341 -- # ver2_l=1 00:13:50.539 06:47:43 ublk -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:50.539 06:47:43 ublk -- scripts/common.sh@344 -- # case "$op" in 00:13:50.539 06:47:43 ublk -- scripts/common.sh@345 -- # : 1 00:13:50.539 06:47:43 ublk -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:50.539 06:47:43 ublk -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:13:50.539 06:47:43 ublk -- scripts/common.sh@365 -- # decimal 1 00:13:50.539 06:47:43 ublk -- scripts/common.sh@353 -- # local d=1 00:13:50.539 06:47:43 ublk -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:50.539 06:47:43 ublk -- scripts/common.sh@355 -- # echo 1 00:13:50.539 06:47:43 ublk -- scripts/common.sh@365 -- # ver1[v]=1 00:13:50.539 06:47:43 ublk -- scripts/common.sh@366 -- # decimal 2 00:13:50.540 06:47:43 ublk -- scripts/common.sh@353 -- # local d=2 00:13:50.540 06:47:43 ublk -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:50.540 06:47:43 ublk -- scripts/common.sh@355 -- # echo 2 00:13:50.540 06:47:43 ublk -- scripts/common.sh@366 -- # ver2[v]=2 00:13:50.540 06:47:43 ublk -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:50.540 06:47:43 ublk -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:50.540 06:47:43 ublk -- scripts/common.sh@368 -- # return 0 00:13:50.540 06:47:43 ublk -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:50.540 06:47:43 ublk -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:13:50.540 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:50.540 --rc genhtml_branch_coverage=1 00:13:50.540 --rc genhtml_function_coverage=1 00:13:50.540 --rc genhtml_legend=1 00:13:50.540 --rc geninfo_all_blocks=1 00:13:50.540 --rc geninfo_unexecuted_blocks=1 00:13:50.540 00:13:50.540 ' 00:13:50.540 06:47:43 ublk -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:13:50.540 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:50.540 --rc genhtml_branch_coverage=1 00:13:50.540 --rc genhtml_function_coverage=1 00:13:50.540 --rc genhtml_legend=1 00:13:50.540 --rc geninfo_all_blocks=1 00:13:50.540 --rc geninfo_unexecuted_blocks=1 00:13:50.540 00:13:50.540 ' 00:13:50.540 06:47:43 ublk -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:13:50.540 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:50.540 --rc genhtml_branch_coverage=1 00:13:50.540 --rc 
genhtml_function_coverage=1 00:13:50.540 --rc genhtml_legend=1 00:13:50.540 --rc geninfo_all_blocks=1 00:13:50.540 --rc geninfo_unexecuted_blocks=1 00:13:50.540 00:13:50.540 ' 00:13:50.540 06:47:43 ublk -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:13:50.540 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:50.540 --rc genhtml_branch_coverage=1 00:13:50.540 --rc genhtml_function_coverage=1 00:13:50.540 --rc genhtml_legend=1 00:13:50.540 --rc geninfo_all_blocks=1 00:13:50.540 --rc geninfo_unexecuted_blocks=1 00:13:50.540 00:13:50.540 ' 00:13:50.540 06:47:43 ublk -- ublk/ublk.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/lvol/common.sh 00:13:50.540 06:47:43 ublk -- lvol/common.sh@6 -- # MALLOC_SIZE_MB=128 00:13:50.540 06:47:43 ublk -- lvol/common.sh@7 -- # MALLOC_BS=512 00:13:50.540 06:47:43 ublk -- lvol/common.sh@8 -- # AIO_SIZE_MB=400 00:13:50.540 06:47:43 ublk -- lvol/common.sh@9 -- # AIO_BS=4096 00:13:50.540 06:47:43 ublk -- lvol/common.sh@10 -- # LVS_DEFAULT_CLUSTER_SIZE_MB=4 00:13:50.540 06:47:43 ublk -- lvol/common.sh@11 -- # LVS_DEFAULT_CLUSTER_SIZE=4194304 00:13:50.540 06:47:43 ublk -- lvol/common.sh@13 -- # LVS_DEFAULT_CAPACITY_MB=124 00:13:50.540 06:47:43 ublk -- lvol/common.sh@14 -- # LVS_DEFAULT_CAPACITY=130023424 00:13:50.540 06:47:43 ublk -- ublk/ublk.sh@11 -- # [[ -z '' ]] 00:13:50.540 06:47:43 ublk -- ublk/ublk.sh@12 -- # NUM_DEVS=4 00:13:50.540 06:47:43 ublk -- ublk/ublk.sh@13 -- # NUM_QUEUE=4 00:13:50.540 06:47:43 ublk -- ublk/ublk.sh@14 -- # QUEUE_DEPTH=512 00:13:50.540 06:47:43 ublk -- ublk/ublk.sh@15 -- # MALLOC_SIZE_MB=128 00:13:50.540 06:47:43 ublk -- ublk/ublk.sh@17 -- # STOP_DISKS=1 00:13:50.540 06:47:43 ublk -- ublk/ublk.sh@27 -- # MALLOC_BS=4096 00:13:50.540 06:47:43 ublk -- ublk/ublk.sh@28 -- # FILE_SIZE=134217728 00:13:50.540 06:47:43 ublk -- ublk/ublk.sh@29 -- # MAX_DEV_ID=3 00:13:50.540 06:47:43 ublk -- ublk/ublk.sh@133 -- # modprobe ublk_drv 00:13:50.540 06:47:43 ublk -- ublk/ublk.sh@136 -- # run_test test_save_ublk_config test_save_config 00:13:50.540 06:47:43 ublk -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:13:50.540 06:47:43 ublk -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:50.540 06:47:43 ublk -- common/autotest_common.sh@10 -- # set +x 00:13:50.540 ************************************ 00:13:50.540 START TEST test_save_ublk_config 00:13:50.540 ************************************ 00:13:50.540 06:47:43 ublk.test_save_ublk_config -- common/autotest_common.sh@1129 -- # test_save_config 00:13:50.540 06:47:43 ublk.test_save_ublk_config -- ublk/ublk.sh@100 -- # local tgtpid blkpath config 00:13:50.540 06:47:43 ublk.test_save_ublk_config -- ublk/ublk.sh@103 -- # tgtpid=82346 00:13:50.540 06:47:43 ublk.test_save_ublk_config -- ublk/ublk.sh@104 -- # trap 'killprocess $tgtpid' EXIT 00:13:50.540 06:47:43 ublk.test_save_ublk_config -- ublk/ublk.sh@102 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ublk 00:13:50.540 06:47:43 ublk.test_save_ublk_config -- ublk/ublk.sh@106 -- # waitforlisten 82346 00:13:50.540 06:47:43 ublk.test_save_ublk_config -- common/autotest_common.sh@835 -- # '[' -z 82346 ']' 00:13:50.540 06:47:43 ublk.test_save_ublk_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:50.540 06:47:43 ublk.test_save_ublk_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:50.540 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
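test_save_ublk_config, starting here, checks that a ublk device survives a config save/restore cycle. Reduced to its RPC skeleton (parameter values copied from the config dump that follows; exact CLI flag spellings are deliberately left out):

    # ublk_create_target    {"cpumask": "1"}
    # bdev_malloc_create    malloc0: num_blocks=8192, block_size=4096  (32 MiB)
    # ublk_start_disk       {"bdev_name": "malloc0", "ublk_id": 0,
    #                        "num_queues": 1, "queue_depth": 128}
    # save_config           -> JSON containing an "ublk" subsystem section,
    #                          later replayed into a fresh spdk_tgt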
00:13:50.540 06:47:43 ublk.test_save_ublk_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:50.540 06:47:43 ublk.test_save_ublk_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:50.540 06:47:43 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:13:50.801 [2024-11-18 06:47:43.656543] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 00:13:50.801 [2024-11-18 06:47:43.656686] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82346 ] 00:13:50.801 [2024-11-18 06:47:43.821862] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:50.801 [2024-11-18 06:47:43.850822] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:51.747 06:47:44 ublk.test_save_ublk_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:51.747 06:47:44 ublk.test_save_ublk_config -- common/autotest_common.sh@868 -- # return 0 00:13:51.747 06:47:44 ublk.test_save_ublk_config -- ublk/ublk.sh@107 -- # blkpath=/dev/ublkb0 00:13:51.747 06:47:44 ublk.test_save_ublk_config -- ublk/ublk.sh@108 -- # rpc_cmd 00:13:51.747 06:47:44 ublk.test_save_ublk_config -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:51.747 06:47:44 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:13:51.747 [2024-11-18 06:47:44.506998] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:13:51.747 [2024-11-18 06:47:44.507955] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:13:51.747 malloc0 00:13:51.747 [2024-11-18 06:47:44.539149] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev malloc0 num_queues 1 queue_depth 128 00:13:51.747 [2024-11-18 06:47:44.539250] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 0 00:13:51.747 [2024-11-18 06:47:44.539259] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:13:51.747 [2024-11-18 06:47:44.539274] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:13:51.747 [2024-11-18 06:47:44.548097] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:13:51.747 [2024-11-18 06:47:44.548138] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:13:51.747 [2024-11-18 06:47:44.555012] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:13:51.747 [2024-11-18 06:47:44.555143] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:13:51.747 [2024-11-18 06:47:44.572002] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:13:51.747 0 00:13:51.747 06:47:44 ublk.test_save_ublk_config -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:51.747 06:47:44 ublk.test_save_ublk_config -- ublk/ublk.sh@115 -- # rpc_cmd save_config 00:13:51.747 06:47:44 ublk.test_save_ublk_config -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:51.747 06:47:44 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:13:52.009 06:47:44 ublk.test_save_ublk_config -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:52.009 06:47:44 ublk.test_save_ublk_config -- ublk/ublk.sh@115 -- # config='{ 00:13:52.009 
"subsystems": [ 00:13:52.009 { 00:13:52.009 "subsystem": "fsdev", 00:13:52.009 "config": [ 00:13:52.009 { 00:13:52.009 "method": "fsdev_set_opts", 00:13:52.009 "params": { 00:13:52.009 "fsdev_io_pool_size": 65535, 00:13:52.009 "fsdev_io_cache_size": 256 00:13:52.009 } 00:13:52.009 } 00:13:52.009 ] 00:13:52.009 }, 00:13:52.009 { 00:13:52.009 "subsystem": "keyring", 00:13:52.009 "config": [] 00:13:52.009 }, 00:13:52.009 { 00:13:52.009 "subsystem": "iobuf", 00:13:52.009 "config": [ 00:13:52.009 { 00:13:52.009 "method": "iobuf_set_options", 00:13:52.009 "params": { 00:13:52.009 "small_pool_count": 8192, 00:13:52.009 "large_pool_count": 1024, 00:13:52.009 "small_bufsize": 8192, 00:13:52.009 "large_bufsize": 135168, 00:13:52.009 "enable_numa": false 00:13:52.009 } 00:13:52.009 } 00:13:52.009 ] 00:13:52.009 }, 00:13:52.009 { 00:13:52.009 "subsystem": "sock", 00:13:52.009 "config": [ 00:13:52.009 { 00:13:52.009 "method": "sock_set_default_impl", 00:13:52.009 "params": { 00:13:52.009 "impl_name": "posix" 00:13:52.009 } 00:13:52.009 }, 00:13:52.009 { 00:13:52.009 "method": "sock_impl_set_options", 00:13:52.009 "params": { 00:13:52.009 "impl_name": "ssl", 00:13:52.009 "recv_buf_size": 4096, 00:13:52.009 "send_buf_size": 4096, 00:13:52.009 "enable_recv_pipe": true, 00:13:52.009 "enable_quickack": false, 00:13:52.009 "enable_placement_id": 0, 00:13:52.009 "enable_zerocopy_send_server": true, 00:13:52.009 "enable_zerocopy_send_client": false, 00:13:52.009 "zerocopy_threshold": 0, 00:13:52.009 "tls_version": 0, 00:13:52.009 "enable_ktls": false 00:13:52.009 } 00:13:52.009 }, 00:13:52.009 { 00:13:52.009 "method": "sock_impl_set_options", 00:13:52.009 "params": { 00:13:52.009 "impl_name": "posix", 00:13:52.009 "recv_buf_size": 2097152, 00:13:52.009 "send_buf_size": 2097152, 00:13:52.009 "enable_recv_pipe": true, 00:13:52.009 "enable_quickack": false, 00:13:52.009 "enable_placement_id": 0, 00:13:52.009 "enable_zerocopy_send_server": true, 00:13:52.009 "enable_zerocopy_send_client": false, 00:13:52.009 "zerocopy_threshold": 0, 00:13:52.009 "tls_version": 0, 00:13:52.009 "enable_ktls": false 00:13:52.009 } 00:13:52.009 } 00:13:52.009 ] 00:13:52.009 }, 00:13:52.009 { 00:13:52.009 "subsystem": "vmd", 00:13:52.009 "config": [] 00:13:52.009 }, 00:13:52.009 { 00:13:52.009 "subsystem": "accel", 00:13:52.009 "config": [ 00:13:52.009 { 00:13:52.009 "method": "accel_set_options", 00:13:52.009 "params": { 00:13:52.009 "small_cache_size": 128, 00:13:52.009 "large_cache_size": 16, 00:13:52.009 "task_count": 2048, 00:13:52.009 "sequence_count": 2048, 00:13:52.009 "buf_count": 2048 00:13:52.009 } 00:13:52.009 } 00:13:52.009 ] 00:13:52.009 }, 00:13:52.009 { 00:13:52.009 "subsystem": "bdev", 00:13:52.009 "config": [ 00:13:52.009 { 00:13:52.009 "method": "bdev_set_options", 00:13:52.009 "params": { 00:13:52.009 "bdev_io_pool_size": 65535, 00:13:52.009 "bdev_io_cache_size": 256, 00:13:52.009 "bdev_auto_examine": true, 00:13:52.009 "iobuf_small_cache_size": 128, 00:13:52.009 "iobuf_large_cache_size": 16 00:13:52.009 } 00:13:52.009 }, 00:13:52.009 { 00:13:52.009 "method": "bdev_raid_set_options", 00:13:52.009 "params": { 00:13:52.009 "process_window_size_kb": 1024, 00:13:52.009 "process_max_bandwidth_mb_sec": 0 00:13:52.009 } 00:13:52.009 }, 00:13:52.009 { 00:13:52.009 "method": "bdev_iscsi_set_options", 00:13:52.009 "params": { 00:13:52.009 "timeout_sec": 30 00:13:52.009 } 00:13:52.009 }, 00:13:52.009 { 00:13:52.009 "method": "bdev_nvme_set_options", 00:13:52.009 "params": { 00:13:52.009 "action_on_timeout": "none", 
00:13:52.009 "timeout_us": 0, 00:13:52.009 "timeout_admin_us": 0, 00:13:52.009 "keep_alive_timeout_ms": 10000, 00:13:52.009 "arbitration_burst": 0, 00:13:52.009 "low_priority_weight": 0, 00:13:52.009 "medium_priority_weight": 0, 00:13:52.009 "high_priority_weight": 0, 00:13:52.009 "nvme_adminq_poll_period_us": 10000, 00:13:52.009 "nvme_ioq_poll_period_us": 0, 00:13:52.009 "io_queue_requests": 0, 00:13:52.009 "delay_cmd_submit": true, 00:13:52.009 "transport_retry_count": 4, 00:13:52.009 "bdev_retry_count": 3, 00:13:52.009 "transport_ack_timeout": 0, 00:13:52.009 "ctrlr_loss_timeout_sec": 0, 00:13:52.009 "reconnect_delay_sec": 0, 00:13:52.009 "fast_io_fail_timeout_sec": 0, 00:13:52.009 "disable_auto_failback": false, 00:13:52.009 "generate_uuids": false, 00:13:52.009 "transport_tos": 0, 00:13:52.009 "nvme_error_stat": false, 00:13:52.009 "rdma_srq_size": 0, 00:13:52.009 "io_path_stat": false, 00:13:52.009 "allow_accel_sequence": false, 00:13:52.009 "rdma_max_cq_size": 0, 00:13:52.009 "rdma_cm_event_timeout_ms": 0, 00:13:52.009 "dhchap_digests": [ 00:13:52.009 "sha256", 00:13:52.009 "sha384", 00:13:52.009 "sha512" 00:13:52.009 ], 00:13:52.009 "dhchap_dhgroups": [ 00:13:52.009 "null", 00:13:52.009 "ffdhe2048", 00:13:52.009 "ffdhe3072", 00:13:52.009 "ffdhe4096", 00:13:52.009 "ffdhe6144", 00:13:52.009 "ffdhe8192" 00:13:52.009 ] 00:13:52.009 } 00:13:52.009 }, 00:13:52.009 { 00:13:52.009 "method": "bdev_nvme_set_hotplug", 00:13:52.009 "params": { 00:13:52.009 "period_us": 100000, 00:13:52.009 "enable": false 00:13:52.009 } 00:13:52.009 }, 00:13:52.009 { 00:13:52.009 "method": "bdev_malloc_create", 00:13:52.009 "params": { 00:13:52.009 "name": "malloc0", 00:13:52.009 "num_blocks": 8192, 00:13:52.009 "block_size": 4096, 00:13:52.009 "physical_block_size": 4096, 00:13:52.009 "uuid": "1b676ac8-0e7e-4edd-88fa-132758f1e105", 00:13:52.009 "optimal_io_boundary": 0, 00:13:52.009 "md_size": 0, 00:13:52.009 "dif_type": 0, 00:13:52.009 "dif_is_head_of_md": false, 00:13:52.009 "dif_pi_format": 0 00:13:52.009 } 00:13:52.009 }, 00:13:52.009 { 00:13:52.009 "method": "bdev_wait_for_examine" 00:13:52.009 } 00:13:52.009 ] 00:13:52.009 }, 00:13:52.009 { 00:13:52.009 "subsystem": "scsi", 00:13:52.009 "config": null 00:13:52.009 }, 00:13:52.009 { 00:13:52.009 "subsystem": "scheduler", 00:13:52.009 "config": [ 00:13:52.009 { 00:13:52.009 "method": "framework_set_scheduler", 00:13:52.009 "params": { 00:13:52.009 "name": "static" 00:13:52.009 } 00:13:52.009 } 00:13:52.009 ] 00:13:52.009 }, 00:13:52.009 { 00:13:52.009 "subsystem": "vhost_scsi", 00:13:52.009 "config": [] 00:13:52.010 }, 00:13:52.010 { 00:13:52.010 "subsystem": "vhost_blk", 00:13:52.010 "config": [] 00:13:52.010 }, 00:13:52.010 { 00:13:52.010 "subsystem": "ublk", 00:13:52.010 "config": [ 00:13:52.010 { 00:13:52.010 "method": "ublk_create_target", 00:13:52.010 "params": { 00:13:52.010 "cpumask": "1" 00:13:52.010 } 00:13:52.010 }, 00:13:52.010 { 00:13:52.010 "method": "ublk_start_disk", 00:13:52.010 "params": { 00:13:52.010 "bdev_name": "malloc0", 00:13:52.010 "ublk_id": 0, 00:13:52.010 "num_queues": 1, 00:13:52.010 "queue_depth": 128 00:13:52.010 } 00:13:52.010 } 00:13:52.010 ] 00:13:52.010 }, 00:13:52.010 { 00:13:52.010 "subsystem": "nbd", 00:13:52.010 "config": [] 00:13:52.010 }, 00:13:52.010 { 00:13:52.010 "subsystem": "nvmf", 00:13:52.010 "config": [ 00:13:52.010 { 00:13:52.010 "method": "nvmf_set_config", 00:13:52.010 "params": { 00:13:52.010 "discovery_filter": "match_any", 00:13:52.010 "admin_cmd_passthru": { 00:13:52.010 "identify_ctrlr": false 
00:13:52.010 }, 00:13:52.010 "dhchap_digests": [ 00:13:52.010 "sha256", 00:13:52.010 "sha384", 00:13:52.010 "sha512" 00:13:52.010 ], 00:13:52.010 "dhchap_dhgroups": [ 00:13:52.010 "null", 00:13:52.010 "ffdhe2048", 00:13:52.010 "ffdhe3072", 00:13:52.010 "ffdhe4096", 00:13:52.010 "ffdhe6144", 00:13:52.010 "ffdhe8192" 00:13:52.010 ] 00:13:52.010 } 00:13:52.010 }, 00:13:52.010 { 00:13:52.010 "method": "nvmf_set_max_subsystems", 00:13:52.010 "params": { 00:13:52.010 "max_subsystems": 1024 00:13:52.010 } 00:13:52.010 }, 00:13:52.010 { 00:13:52.010 "method": "nvmf_set_crdt", 00:13:52.010 "params": { 00:13:52.010 "crdt1": 0, 00:13:52.010 "crdt2": 0, 00:13:52.010 "crdt3": 0 00:13:52.010 } 00:13:52.010 } 00:13:52.010 ] 00:13:52.010 }, 00:13:52.010 { 00:13:52.010 "subsystem": "iscsi", 00:13:52.010 "config": [ 00:13:52.010 { 00:13:52.010 "method": "iscsi_set_options", 00:13:52.010 "params": { 00:13:52.010 "node_base": "iqn.2016-06.io.spdk", 00:13:52.010 "max_sessions": 128, 00:13:52.010 "max_connections_per_session": 2, 00:13:52.010 "max_queue_depth": 64, 00:13:52.010 "default_time2wait": 2, 00:13:52.010 "default_time2retain": 20, 00:13:52.010 "first_burst_length": 8192, 00:13:52.010 "immediate_data": true, 00:13:52.010 "allow_duplicated_isid": false, 00:13:52.010 "error_recovery_level": 0, 00:13:52.010 "nop_timeout": 60, 00:13:52.010 "nop_in_interval": 30, 00:13:52.010 "disable_chap": false, 00:13:52.010 "require_chap": false, 00:13:52.010 "mutual_chap": false, 00:13:52.010 "chap_group": 0, 00:13:52.010 "max_large_datain_per_connection": 64, 00:13:52.010 "max_r2t_per_connection": 4, 00:13:52.010 "pdu_pool_size": 36864, 00:13:52.010 "immediate_data_pool_size": 16384, 00:13:52.010 "data_out_pool_size": 2048 00:13:52.010 } 00:13:52.010 } 00:13:52.010 ] 00:13:52.010 } 00:13:52.010 ] 00:13:52.010 }' 00:13:52.010 06:47:44 ublk.test_save_ublk_config -- ublk/ublk.sh@116 -- # killprocess 82346 00:13:52.010 06:47:44 ublk.test_save_ublk_config -- common/autotest_common.sh@954 -- # '[' -z 82346 ']' 00:13:52.010 06:47:44 ublk.test_save_ublk_config -- common/autotest_common.sh@958 -- # kill -0 82346 00:13:52.010 06:47:44 ublk.test_save_ublk_config -- common/autotest_common.sh@959 -- # uname 00:13:52.010 06:47:44 ublk.test_save_ublk_config -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:52.010 06:47:44 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 82346 00:13:52.010 killing process with pid 82346 00:13:52.010 06:47:44 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:52.010 06:47:44 ublk.test_save_ublk_config -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:52.010 06:47:44 ublk.test_save_ublk_config -- common/autotest_common.sh@972 -- # echo 'killing process with pid 82346' 00:13:52.010 06:47:44 ublk.test_save_ublk_config -- common/autotest_common.sh@973 -- # kill 82346 00:13:52.010 06:47:44 ublk.test_save_ublk_config -- common/autotest_common.sh@978 -- # wait 82346 00:13:52.271 [2024-11-18 06:47:45.171651] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:13:52.271 [2024-11-18 06:47:45.209033] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:13:52.271 [2024-11-18 06:47:45.209191] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:13:52.271 [2024-11-18 06:47:45.217018] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:13:52.272 [2024-11-18 
06:47:45.217095] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:13:52.272 [2024-11-18 06:47:45.217104] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:13:52.272 [2024-11-18 06:47:45.217145] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:13:52.272 [2024-11-18 06:47:45.217301] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:13:52.844 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:52.844 06:47:45 ublk.test_save_ublk_config -- ublk/ublk.sh@119 -- # tgtpid=82390 00:13:52.844 06:47:45 ublk.test_save_ublk_config -- ublk/ublk.sh@121 -- # waitforlisten 82390 00:13:52.844 06:47:45 ublk.test_save_ublk_config -- common/autotest_common.sh@835 -- # '[' -z 82390 ']' 00:13:52.844 06:47:45 ublk.test_save_ublk_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:52.844 06:47:45 ublk.test_save_ublk_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:52.844 06:47:45 ublk.test_save_ublk_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:52.844 06:47:45 ublk.test_save_ublk_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:52.844 06:47:45 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:13:52.844 06:47:45 ublk.test_save_ublk_config -- ublk/ublk.sh@118 -- # echo '{ 00:13:52.844 "subsystems": [ 00:13:52.844 { 00:13:52.844 "subsystem": "fsdev", 00:13:52.844 "config": [ 00:13:52.844 { 00:13:52.844 "method": "fsdev_set_opts", 00:13:52.844 "params": { 00:13:52.844 "fsdev_io_pool_size": 65535, 00:13:52.844 "fsdev_io_cache_size": 256 00:13:52.844 } 00:13:52.844 } 00:13:52.844 ] 00:13:52.844 }, 00:13:52.844 { 00:13:52.844 "subsystem": "keyring", 00:13:52.844 "config": [] 00:13:52.844 }, 00:13:52.844 { 00:13:52.844 "subsystem": "iobuf", 00:13:52.844 "config": [ 00:13:52.844 { 00:13:52.844 "method": "iobuf_set_options", 00:13:52.844 "params": { 00:13:52.844 "small_pool_count": 8192, 00:13:52.844 "large_pool_count": 1024, 00:13:52.844 "small_bufsize": 8192, 00:13:52.844 "large_bufsize": 135168, 00:13:52.844 "enable_numa": false 00:13:52.844 } 00:13:52.844 } 00:13:52.844 ] 00:13:52.844 }, 00:13:52.844 { 00:13:52.844 "subsystem": "sock", 00:13:52.844 "config": [ 00:13:52.844 { 00:13:52.844 "method": "sock_set_default_impl", 00:13:52.844 "params": { 00:13:52.844 "impl_name": "posix" 00:13:52.844 } 00:13:52.844 }, 00:13:52.844 { 00:13:52.844 "method": "sock_impl_set_options", 00:13:52.844 "params": { 00:13:52.844 "impl_name": "ssl", 00:13:52.844 "recv_buf_size": 4096, 00:13:52.844 "send_buf_size": 4096, 00:13:52.844 "enable_recv_pipe": true, 00:13:52.844 "enable_quickack": false, 00:13:52.844 "enable_placement_id": 0, 00:13:52.844 "enable_zerocopy_send_server": true, 00:13:52.844 "enable_zerocopy_send_client": false, 00:13:52.844 "zerocopy_threshold": 0, 00:13:52.844 "tls_version": 0, 00:13:52.844 "enable_ktls": false 00:13:52.844 } 00:13:52.844 }, 00:13:52.844 { 00:13:52.844 "method": "sock_impl_set_options", 00:13:52.844 "params": { 00:13:52.844 "impl_name": "posix", 00:13:52.844 "recv_buf_size": 2097152, 00:13:52.844 "send_buf_size": 2097152, 00:13:52.844 "enable_recv_pipe": true, 00:13:52.844 "enable_quickack": false, 00:13:52.845 "enable_placement_id": 0, 00:13:52.845 "enable_zerocopy_send_server": true, 00:13:52.845 "enable_zerocopy_send_client": false, 00:13:52.845 "zerocopy_threshold": 0, 00:13:52.845 "tls_version": 0, 00:13:52.845 "enable_ktls": false 
00:13:52.845 } 00:13:52.845 } 00:13:52.845 ] 00:13:52.845 }, 00:13:52.845 { 00:13:52.845 "subsystem": "vmd", 00:13:52.845 "config": [] 00:13:52.845 }, 00:13:52.845 { 00:13:52.845 "subsystem": "accel", 00:13:52.845 "config": [ 00:13:52.845 { 00:13:52.845 "method": "accel_set_options", 00:13:52.845 "params": { 00:13:52.845 "small_cache_size": 128, 00:13:52.845 "large_cache_size": 16, 00:13:52.845 "task_count": 2048, 00:13:52.845 "sequence_count": 2048, 00:13:52.845 "buf_count": 2048 00:13:52.845 } 00:13:52.845 } 00:13:52.845 ] 00:13:52.845 }, 00:13:52.845 { 00:13:52.845 "subsystem": "bdev", 00:13:52.845 "config": [ 00:13:52.845 { 00:13:52.845 "method": "bdev_set_options", 00:13:52.845 "params": { 00:13:52.845 "bdev_io_pool_size": 65535, 00:13:52.845 "bdev_io_cache_size": 256, 00:13:52.845 "bdev_auto_examine": true, 00:13:52.845 "iobuf_small_cache_size": 128, 00:13:52.845 "iobuf_large_cache_size": 16 00:13:52.845 } 00:13:52.845 }, 00:13:52.845 { 00:13:52.845 "method": "bdev_raid_set_options", 00:13:52.845 "params": { 00:13:52.845 "process_window_size_kb": 1024, 00:13:52.845 "process_max_bandwidth_mb_sec": 0 00:13:52.845 } 00:13:52.845 }, 00:13:52.845 { 00:13:52.845 "method": "bdev_iscsi_set_options", 00:13:52.845 "params": { 00:13:52.845 "timeout_sec": 30 00:13:52.845 } 00:13:52.845 }, 00:13:52.845 { 00:13:52.845 "method": "bdev_nvme_set_options", 00:13:52.845 "params": { 00:13:52.845 "action_on_timeout": "none", 00:13:52.845 "timeout_us": 0, 00:13:52.845 "timeout_admin_us": 0, 00:13:52.845 "keep_alive_timeout_ms": 10000, 00:13:52.845 "arbitration_burst": 0, 00:13:52.845 "low_priority_weight": 0, 00:13:52.845 "medium_priority_weight": 0, 00:13:52.845 "high_priority_weight": 0, 00:13:52.845 "nvme_adminq_poll_period_us": 10000, 00:13:52.845 "nvme_ioq_poll_period_us": 0, 00:13:52.845 "io_queue_requests": 0, 00:13:52.845 "delay_cmd_submit": true, 00:13:52.845 "transport_retry_count": 4, 00:13:52.845 "bdev_retry_count": 3, 00:13:52.845 "transport_ack_timeout": 0, 00:13:52.845 "ctrlr_loss_timeout_sec": 0, 00:13:52.845 "reconnect_delay_sec": 0, 00:13:52.845 "fast_io_fail_timeout_sec": 0, 00:13:52.845 "disable_auto_failback": false, 00:13:52.845 "generate_uuids": false, 00:13:52.845 "transport_tos": 0, 00:13:52.845 "nvme_error_stat": false, 00:13:52.845 "rdma_srq_size": 0, 00:13:52.845 "io_path_stat": false, 00:13:52.845 "allow_accel_sequence": false, 00:13:52.845 "rdma_max_cq_size": 0, 00:13:52.845 "rdma_cm_event_timeout_ms": 0, 00:13:52.845 "dhchap_digests": [ 00:13:52.845 "sha256", 00:13:52.845 "sha384", 00:13:52.845 "sha512" 00:13:52.845 ], 00:13:52.845 "dhchap_dhgroups": [ 00:13:52.845 "null", 00:13:52.845 "ffdhe2048", 00:13:52.845 "ffdhe3072", 00:13:52.845 "ffdhe4096", 00:13:52.845 "ffdhe6144", 00:13:52.845 "ffdhe8192" 00:13:52.845 ] 00:13:52.845 } 00:13:52.845 }, 00:13:52.845 { 00:13:52.845 "method": "bdev_nvme_set_hotplug", 00:13:52.845 "params": { 00:13:52.845 "period_us": 100000, 00:13:52.845 "enable": false 00:13:52.845 } 00:13:52.845 }, 00:13:52.845 { 00:13:52.845 "method": "bdev_malloc_create", 00:13:52.845 "params": { 00:13:52.845 "name": "malloc0", 00:13:52.845 "num_blocks": 8192, 00:13:52.845 "block_size": 4096, 00:13:52.845 "physical_block_size": 4096, 00:13:52.845 "uuid": "1b676ac8-0e7e-4edd-88fa-132758f1e105", 00:13:52.845 "optimal_io_boundary": 0, 00:13:52.845 "md_size": 0, 00:13:52.845 "dif_type": 0, 00:13:52.845 "dif_is_head_of_md": false, 00:13:52.845 "dif_pi_format": 0 00:13:52.845 } 00:13:52.845 }, 00:13:52.845 { 00:13:52.845 "method": "bdev_wait_for_examine" 00:13:52.845 } 
00:13:52.845 ] 00:13:52.845 }, 00:13:52.845 { 00:13:52.845 "subsystem": "scsi", 00:13:52.845 "config": null 00:13:52.845 }, 00:13:52.845 { 00:13:52.845 "subsystem": "scheduler", 00:13:52.845 "config": [ 00:13:52.845 { 00:13:52.845 "method": "framework_set_scheduler", 00:13:52.845 "params": { 00:13:52.845 "name": "static" 00:13:52.845 } 00:13:52.845 } 00:13:52.845 ] 00:13:52.845 }, 00:13:52.845 { 00:13:52.845 "subsystem": "vhost_scsi", 00:13:52.845 "config": [] 00:13:52.845 }, 00:13:52.845 { 00:13:52.845 "subsystem": "vhost_blk", 00:13:52.845 "config": [] 00:13:52.845 }, 00:13:52.845 { 00:13:52.845 "subsystem": "ublk", 00:13:52.845 "config": [ 00:13:52.845 { 00:13:52.845 "method": "ublk_create_target", 00:13:52.845 "params": { 00:13:52.845 "cpumask": "1" 00:13:52.845 } 00:13:52.845 }, 00:13:52.845 { 00:13:52.845 "method": "ublk_start_disk", 00:13:52.845 "params": { 00:13:52.845 "bdev_name": "malloc0", 00:13:52.845 "ublk_id": 0, 00:13:52.845 "num_queues": 1, 00:13:52.845 "queue_depth": 128 00:13:52.845 } 00:13:52.845 } 00:13:52.845 ] 00:13:52.845 }, 00:13:52.845 { 00:13:52.845 "subsystem": "nbd", 00:13:52.845 "config": [] 00:13:52.845 }, 00:13:52.845 { 00:13:52.845 "subsystem": "nvmf", 00:13:52.845 "config": [ 00:13:52.845 { 00:13:52.845 "method": "nvmf_set_config", 00:13:52.845 "params": { 00:13:52.845 "discovery_filter": "match_any", 00:13:52.845 "admin_cmd_passthru": { 00:13:52.845 "identify_ctrlr": false 00:13:52.845 }, 00:13:52.845 "dhchap_digests": [ 00:13:52.845 "sha256", 00:13:52.845 "sha384", 00:13:52.845 "sha512" 00:13:52.845 ], 00:13:52.845 "dhchap_dhgroups": [ 00:13:52.845 "null", 00:13:52.845 "ffdhe2048", 00:13:52.845 "ffdhe3072", 00:13:52.845 "ffdhe4096", 00:13:52.845 "ffdhe6144", 00:13:52.845 "ffdhe8192" 00:13:52.845 ] 00:13:52.845 } 00:13:52.845 }, 00:13:52.845 { 00:13:52.845 "method": "nvmf_set_max_subsystems", 00:13:52.845 "params": { 00:13:52.845 "max_subsystems": 1024 00:13:52.845 } 00:13:52.845 }, 00:13:52.845 { 00:13:52.845 "method": "nvmf_set_crdt", 00:13:52.845 "params": { 00:13:52.845 "crdt1": 0, 00:13:52.845 "crdt2": 0, 00:13:52.845 "crdt3": 0 00:13:52.845 } 00:13:52.845 } 00:13:52.845 ] 00:13:52.845 }, 00:13:52.845 { 00:13:52.845 "subsystem": "iscsi", 00:13:52.845 "config": [ 00:13:52.845 { 00:13:52.845 "method": "iscsi_set_options", 00:13:52.845 "params": { 00:13:52.845 "node_base": "iqn.2016-06.io.spdk", 00:13:52.845 "max_sessions": 128, 00:13:52.845 "max_connections_per_session": 2, 00:13:52.845 "max_queue_depth": 64, 00:13:52.845 "default_time2wait": 2, 00:13:52.845 "default_time2retain": 20, 00:13:52.845 "first_burst_length": 8192, 00:13:52.845 "immediate_data": true, 00:13:52.845 "allow_duplicated_isid": false, 00:13:52.845 "error_recovery_level": 0, 00:13:52.845 "nop_timeout": 60, 00:13:52.845 "nop_in_interval": 30, 00:13:52.845 "disable_chap": false, 00:13:52.845 "require_chap": false, 00:13:52.845 "mutual_chap": false, 00:13:52.845 "chap_group": 0, 00:13:52.845 "max_large_datain_per_connection": 64, 00:13:52.845 "max_r2t_per_connection": 4, 00:13:52.845 "pdu_pool_size": 36864, 00:13:52.845 "immediate_data_pool_size": 16384, 00:13:52.845 "data_out_pool_size": 2048 00:13:52.845 } 00:13:52.845 } 00:13:52.845 ] 00:13:52.845 } 00:13:52.845 ] 00:13:52.845 }' 00:13:52.845 06:47:45 ublk.test_save_ublk_config -- ublk/ublk.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ublk -c /dev/fd/63 00:13:52.845 [2024-11-18 06:47:45.746478] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 
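The /dev/fd/63 argument in the spdk_tgt invocation above is bash process substitution: the captured JSON is fed straight to the new target and never touches disk. A minimal sketch of the restore step (assuming $config holds the output of the earlier save_config call, and that the first target has already been killed):

    SPDK=/home/vagrant/spdk_repo/spdk
    config=$("$SPDK/scripts/rpc.py" save_config)              # captured from the first target
    "$SPDK/build/bin/spdk_tgt" -L ublk -c <(echo "$config")   # shows up as -c /dev/fd/63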
00:13:52.845 [2024-11-18 06:47:45.746628] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82390 ] 00:13:52.845 [2024-11-18 06:47:45.908527] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:53.107 [2024-11-18 06:47:45.938841] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:53.368 [2024-11-18 06:47:46.296996] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:13:53.368 [2024-11-18 06:47:46.297321] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:13:53.368 [2024-11-18 06:47:46.305119] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev malloc0 num_queues 1 queue_depth 128 00:13:53.368 [2024-11-18 06:47:46.305211] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 0 00:13:53.368 [2024-11-18 06:47:46.305219] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:13:53.368 [2024-11-18 06:47:46.305228] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:13:53.368 [2024-11-18 06:47:46.314091] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:13:53.368 [2024-11-18 06:47:46.314131] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:13:53.368 [2024-11-18 06:47:46.321011] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:13:53.368 [2024-11-18 06:47:46.321113] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:13:53.368 [2024-11-18 06:47:46.338010] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:13:53.630 06:47:46 ublk.test_save_ublk_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:53.630 06:47:46 ublk.test_save_ublk_config -- common/autotest_common.sh@868 -- # return 0 00:13:53.630 06:47:46 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # jq -r '.[0].ublk_device' 00:13:53.630 06:47:46 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # rpc_cmd ublk_get_disks 00:13:53.630 06:47:46 ublk.test_save_ublk_config -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:53.630 06:47:46 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:13:53.630 06:47:46 ublk.test_save_ublk_config -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:53.630 06:47:46 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # [[ /dev/ublkb0 == \/\d\e\v\/\u\b\l\k\b\0 ]] 00:13:53.630 06:47:46 ublk.test_save_ublk_config -- ublk/ublk.sh@123 -- # [[ -b /dev/ublkb0 ]] 00:13:53.630 06:47:46 ublk.test_save_ublk_config -- ublk/ublk.sh@125 -- # killprocess 82390 00:13:53.630 06:47:46 ublk.test_save_ublk_config -- common/autotest_common.sh@954 -- # '[' -z 82390 ']' 00:13:53.630 06:47:46 ublk.test_save_ublk_config -- common/autotest_common.sh@958 -- # kill -0 82390 00:13:53.630 06:47:46 ublk.test_save_ublk_config -- common/autotest_common.sh@959 -- # uname 00:13:53.630 06:47:46 ublk.test_save_ublk_config -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:53.630 06:47:46 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 82390 00:13:53.630 killing process with pid 82390 00:13:53.630 06:47:46 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:53.630 
06:47:46 ublk.test_save_ublk_config -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:53.630 06:47:46 ublk.test_save_ublk_config -- common/autotest_common.sh@972 -- # echo 'killing process with pid 82390' 00:13:53.630 06:47:46 ublk.test_save_ublk_config -- common/autotest_common.sh@973 -- # kill 82390 00:13:53.630 06:47:46 ublk.test_save_ublk_config -- common/autotest_common.sh@978 -- # wait 82390 00:13:53.892 [2024-11-18 06:47:46.959537] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:13:54.152 [2024-11-18 06:47:47.002119] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:13:54.153 [2024-11-18 06:47:47.002267] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:13:54.153 [2024-11-18 06:47:47.011039] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:13:54.153 [2024-11-18 06:47:47.011115] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:13:54.153 [2024-11-18 06:47:47.011125] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:13:54.153 [2024-11-18 06:47:47.011160] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:13:54.153 [2024-11-18 06:47:47.011316] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:13:54.414 06:47:47 ublk.test_save_ublk_config -- ublk/ublk.sh@126 -- # trap - EXIT 00:13:54.414 00:13:54.414 real 0m3.899s 00:13:54.414 user 0m2.693s 00:13:54.414 sys 0m1.863s 00:13:54.414 ************************************ 00:13:54.414 END TEST test_save_ublk_config 00:13:54.414 ************************************ 00:13:54.414 06:47:47 ublk.test_save_ublk_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:54.414 06:47:47 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:13:54.674 06:47:47 ublk -- ublk/ublk.sh@139 -- # spdk_pid=82440 00:13:54.674 06:47:47 ublk -- ublk/ublk.sh@140 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:13:54.674 06:47:47 ublk -- ublk/ublk.sh@141 -- # waitforlisten 82440 00:13:54.674 06:47:47 ublk -- common/autotest_common.sh@835 -- # '[' -z 82440 ']' 00:13:54.674 06:47:47 ublk -- ublk/ublk.sh@138 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:13:54.674 06:47:47 ublk -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:54.674 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:54.674 06:47:47 ublk -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:54.674 06:47:47 ublk -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:54.674 06:47:47 ublk -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:54.674 06:47:47 ublk -- common/autotest_common.sh@10 -- # set +x 00:13:54.674 [2024-11-18 06:47:47.604751] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 
00:13:54.674 [2024-11-18 06:47:47.604906] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82440 ] 00:13:54.934 [2024-11-18 06:47:47.765484] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:13:54.934 [2024-11-18 06:47:47.795893] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:54.934 [2024-11-18 06:47:47.795951] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:55.507 06:47:48 ublk -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:55.507 06:47:48 ublk -- common/autotest_common.sh@868 -- # return 0 00:13:55.507 06:47:48 ublk -- ublk/ublk.sh@143 -- # run_test test_create_ublk test_create_ublk 00:13:55.507 06:47:48 ublk -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:13:55.507 06:47:48 ublk -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:55.507 06:47:48 ublk -- common/autotest_common.sh@10 -- # set +x 00:13:55.507 ************************************ 00:13:55.507 START TEST test_create_ublk 00:13:55.507 ************************************ 00:13:55.507 06:47:48 ublk.test_create_ublk -- common/autotest_common.sh@1129 -- # test_create_ublk 00:13:55.507 06:47:48 ublk.test_create_ublk -- ublk/ublk.sh@33 -- # rpc_cmd ublk_create_target 00:13:55.507 06:47:48 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:55.507 06:47:48 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:13:55.507 [2024-11-18 06:47:48.481001] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:13:55.508 [2024-11-18 06:47:48.483068] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:13:55.508 06:47:48 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:55.508 06:47:48 ublk.test_create_ublk -- ublk/ublk.sh@33 -- # ublk_target= 00:13:55.508 06:47:48 ublk.test_create_ublk -- ublk/ublk.sh@35 -- # rpc_cmd bdev_malloc_create 128 4096 00:13:55.508 06:47:48 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:55.508 06:47:48 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:13:55.508 06:47:48 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:55.508 06:47:48 ublk.test_create_ublk -- ublk/ublk.sh@35 -- # malloc_name=Malloc0 00:13:55.508 06:47:48 ublk.test_create_ublk -- ublk/ublk.sh@37 -- # rpc_cmd ublk_start_disk Malloc0 0 -q 4 -d 512 00:13:55.508 06:47:48 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:55.508 06:47:48 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:13:55.508 [2024-11-18 06:47:48.561171] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev Malloc0 num_queues 4 queue_depth 512 00:13:55.508 [2024-11-18 06:47:48.561626] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc0 via ublk 0 00:13:55.508 [2024-11-18 06:47:48.561644] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:13:55.508 [2024-11-18 06:47:48.561655] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:13:55.508 [2024-11-18 06:47:48.570300] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:13:55.508 [2024-11-18 06:47:48.570357] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:13:55.508 
[2024-11-18 06:47:48.577038] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:13:55.508 [2024-11-18 06:47:48.577783] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:13:55.769 [2024-11-18 06:47:48.607012] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:13:55.769 06:47:48 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:55.769 06:47:48 ublk.test_create_ublk -- ublk/ublk.sh@37 -- # ublk_id=0 00:13:55.769 06:47:48 ublk.test_create_ublk -- ublk/ublk.sh@38 -- # ublk_path=/dev/ublkb0 00:13:55.769 06:47:48 ublk.test_create_ublk -- ublk/ublk.sh@39 -- # rpc_cmd ublk_get_disks -n 0 00:13:55.769 06:47:48 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:55.769 06:47:48 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:13:55.769 06:47:48 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:55.769 06:47:48 ublk.test_create_ublk -- ublk/ublk.sh@39 -- # ublk_dev='[ 00:13:55.769 { 00:13:55.769 "ublk_device": "/dev/ublkb0", 00:13:55.769 "id": 0, 00:13:55.769 "queue_depth": 512, 00:13:55.769 "num_queues": 4, 00:13:55.769 "bdev_name": "Malloc0" 00:13:55.769 } 00:13:55.769 ]' 00:13:55.769 06:47:48 ublk.test_create_ublk -- ublk/ublk.sh@41 -- # jq -r '.[0].ublk_device' 00:13:55.769 06:47:48 ublk.test_create_ublk -- ublk/ublk.sh@41 -- # [[ /dev/ublkb0 = \/\d\e\v\/\u\b\l\k\b\0 ]] 00:13:55.769 06:47:48 ublk.test_create_ublk -- ublk/ublk.sh@42 -- # jq -r '.[0].id' 00:13:55.769 06:47:48 ublk.test_create_ublk -- ublk/ublk.sh@42 -- # [[ 0 = \0 ]] 00:13:55.769 06:47:48 ublk.test_create_ublk -- ublk/ublk.sh@43 -- # jq -r '.[0].queue_depth' 00:13:55.769 06:47:48 ublk.test_create_ublk -- ublk/ublk.sh@43 -- # [[ 512 = \5\1\2 ]] 00:13:55.769 06:47:48 ublk.test_create_ublk -- ublk/ublk.sh@44 -- # jq -r '.[0].num_queues' 00:13:55.769 06:47:48 ublk.test_create_ublk -- ublk/ublk.sh@44 -- # [[ 4 = \4 ]] 00:13:55.769 06:47:48 ublk.test_create_ublk -- ublk/ublk.sh@45 -- # jq -r '.[0].bdev_name' 00:13:55.769 06:47:48 ublk.test_create_ublk -- ublk/ublk.sh@45 -- # [[ Malloc0 = \M\a\l\l\o\c\0 ]] 00:13:55.769 06:47:48 ublk.test_create_ublk -- ublk/ublk.sh@48 -- # run_fio_test /dev/ublkb0 0 134217728 write 0xcc '--time_based --runtime=10' 00:13:55.769 06:47:48 ublk.test_create_ublk -- lvol/common.sh@40 -- # local file=/dev/ublkb0 00:13:55.769 06:47:48 ublk.test_create_ublk -- lvol/common.sh@41 -- # local offset=0 00:13:55.769 06:47:48 ublk.test_create_ublk -- lvol/common.sh@42 -- # local size=134217728 00:13:55.769 06:47:48 ublk.test_create_ublk -- lvol/common.sh@43 -- # local rw=write 00:13:55.769 06:47:48 ublk.test_create_ublk -- lvol/common.sh@44 -- # local pattern=0xcc 00:13:55.769 06:47:48 ublk.test_create_ublk -- lvol/common.sh@45 -- # local 'extra_params=--time_based --runtime=10' 00:13:55.769 06:47:48 ublk.test_create_ublk -- lvol/common.sh@47 -- # local pattern_template= fio_template= 00:13:55.769 06:47:48 ublk.test_create_ublk -- lvol/common.sh@48 -- # [[ -n 0xcc ]] 00:13:55.769 06:47:48 ublk.test_create_ublk -- lvol/common.sh@49 -- # pattern_template='--do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0' 00:13:55.769 06:47:48 ublk.test_create_ublk -- lvol/common.sh@52 -- # fio_template='fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 --rw=write --direct=1 --time_based --runtime=10 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0' 
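
Note: the DEBUG entries above trace the kernel control-command sequence (UBLK_CMD_ADD_DEV, UBLK_CMD_SET_PARAMS, UBLK_CMD_START_DEV) that fires when a malloc bdev is exposed through ublk. Driven by hand over the RPC socket, the same setup condenses to the sketch below; the commands mirror the rpc_cmd calls in this run:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc ublk_create_target                      # load the ublk target (requires ublk_drv in the kernel)
    $rpc bdev_malloc_create 128 4096             # 128 MiB ramdisk, 4 KiB blocks; prints the auto-assigned name (Malloc0 in this run)
    $rpc ublk_start_disk Malloc0 0 -q 4 -d 512   # expose as /dev/ublkb0 with 4 queues, queue depth 512
    $rpc ublk_get_disks -n 0                     # JSON array describing /dev/ublkb0, as checked with jq above
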
00:13:55.769 06:47:48 ublk.test_create_ublk -- lvol/common.sh@53 -- # fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 --rw=write --direct=1 --time_based --runtime=10 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0 00:13:56.030 fio: verification read phase will never start because write phase uses all of runtime 00:13:56.030 fio_test: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=psync, iodepth=1 00:13:56.030 fio-3.35 00:13:56.030 Starting 1 process 00:14:06.115 00:14:06.115 fio_test: (groupid=0, jobs=1): err= 0: pid=82485: Mon Nov 18 06:47:59 2024 00:14:06.115 write: IOPS=16.5k, BW=64.4MiB/s (67.5MB/s)(644MiB/10001msec); 0 zone resets 00:14:06.115 clat (usec): min=34, max=4366, avg=59.85, stdev=116.03 00:14:06.115 lat (usec): min=35, max=4367, avg=60.30, stdev=116.05 00:14:06.115 clat percentiles (usec): 00:14:06.115 | 1.00th=[ 41], 5.00th=[ 43], 10.00th=[ 45], 20.00th=[ 47], 00:14:06.115 | 30.00th=[ 49], 40.00th=[ 50], 50.00th=[ 53], 60.00th=[ 58], 00:14:06.115 | 70.00th=[ 61], 80.00th=[ 64], 90.00th=[ 68], 95.00th=[ 72], 00:14:06.115 | 99.00th=[ 86], 99.50th=[ 97], 99.90th=[ 2540], 99.95th=[ 3392], 00:14:06.115 | 99.99th=[ 3949] 00:14:06.115 bw ( KiB/s): min=26888, max=78040, per=100.00%, avg=66439.58, stdev=12541.80, samples=19 00:14:06.115 iops : min= 6722, max=19510, avg=16609.89, stdev=3135.45, samples=19 00:14:06.115 lat (usec) : 50=39.40%, 100=60.14%, 250=0.24%, 500=0.03%, 750=0.01% 00:14:06.115 lat (usec) : 1000=0.01% 00:14:06.115 lat (msec) : 2=0.05%, 4=0.11%, 10=0.01% 00:14:06.115 cpu : usr=3.00%, sys=13.58%, ctx=164926, majf=0, minf=795 00:14:06.115 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:06.115 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:06.115 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:06.115 issued rwts: total=0,164925,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:06.115 latency : target=0, window=0, percentile=100.00%, depth=1 00:14:06.115 00:14:06.115 Run status group 0 (all jobs): 00:14:06.115 WRITE: bw=64.4MiB/s (67.5MB/s), 64.4MiB/s-64.4MiB/s (67.5MB/s-67.5MB/s), io=644MiB (676MB), run=10001-10001msec 00:14:06.115 00:14:06.115 Disk stats (read/write): 00:14:06.115 ublkb0: ios=0/163447, merge=0/0, ticks=0/8336, in_queue=8337, util=99.05% 00:14:06.115 06:47:59 ublk.test_create_ublk -- ublk/ublk.sh@51 -- # rpc_cmd ublk_stop_disk 0 00:14:06.115 06:47:59 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:06.115 06:47:59 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:06.115 [2024-11-18 06:47:59.019272] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:14:06.115 [2024-11-18 06:47:59.066026] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:14:06.115 [2024-11-18 06:47:59.066797] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:14:06.115 [2024-11-18 06:47:59.074006] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:14:06.115 [2024-11-18 06:47:59.074272] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:14:06.115 [2024-11-18 06:47:59.074303] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:14:06.115 06:47:59 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:06.115 06:47:59 ublk.test_create_ublk -- ublk/ublk.sh@53 -- # NOT rpc_cmd 
ublk_stop_disk 0 00:14:06.115 06:47:59 ublk.test_create_ublk -- common/autotest_common.sh@652 -- # local es=0 00:14:06.115 06:47:59 ublk.test_create_ublk -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd ublk_stop_disk 0 00:14:06.115 06:47:59 ublk.test_create_ublk -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:14:06.115 06:47:59 ublk.test_create_ublk -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:06.115 06:47:59 ublk.test_create_ublk -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:14:06.115 06:47:59 ublk.test_create_ublk -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:06.115 06:47:59 ublk.test_create_ublk -- common/autotest_common.sh@655 -- # rpc_cmd ublk_stop_disk 0 00:14:06.115 06:47:59 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:06.115 06:47:59 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:06.115 [2024-11-18 06:47:59.090076] ublk.c:1087:ublk_stop_disk: *ERROR*: no ublk dev with ublk_id=0 00:14:06.115 request: 00:14:06.115 { 00:14:06.115 "ublk_id": 0, 00:14:06.115 "method": "ublk_stop_disk", 00:14:06.115 "req_id": 1 00:14:06.115 } 00:14:06.115 Got JSON-RPC error response 00:14:06.115 response: 00:14:06.115 { 00:14:06.115 "code": -19, 00:14:06.115 "message": "No such device" 00:14:06.115 } 00:14:06.115 06:47:59 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:14:06.115 06:47:59 ublk.test_create_ublk -- common/autotest_common.sh@655 -- # es=1 00:14:06.115 06:47:59 ublk.test_create_ublk -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:14:06.115 06:47:59 ublk.test_create_ublk -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:14:06.115 06:47:59 ublk.test_create_ublk -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:14:06.115 06:47:59 ublk.test_create_ublk -- ublk/ublk.sh@54 -- # rpc_cmd ublk_destroy_target 00:14:06.115 06:47:59 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:06.115 06:47:59 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:06.115 [2024-11-18 06:47:59.106053] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:14:06.115 [2024-11-18 06:47:59.107332] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:14:06.115 [2024-11-18 06:47:59.107358] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:14:06.115 06:47:59 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:06.115 06:47:59 ublk.test_create_ublk -- ublk/ublk.sh@56 -- # rpc_cmd bdev_malloc_delete Malloc0 00:14:06.115 06:47:59 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:06.115 06:47:59 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:06.115 06:47:59 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:06.115 06:47:59 ublk.test_create_ublk -- ublk/ublk.sh@57 -- # check_leftover_devices 00:14:06.115 06:47:59 ublk.test_create_ublk -- lvol/common.sh@25 -- # rpc_cmd bdev_get_bdevs 00:14:06.115 06:47:59 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:06.115 06:47:59 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:06.115 06:47:59 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:06.115 06:47:59 ublk.test_create_ublk -- lvol/common.sh@25 -- # leftover_bdevs='[]' 00:14:06.115 06:47:59 ublk.test_create_ublk -- lvol/common.sh@26 -- # jq length 00:14:06.373 06:47:59 
ublk.test_create_ublk -- lvol/common.sh@26 -- # '[' 0 == 0 ']' 00:14:06.373 06:47:59 ublk.test_create_ublk -- lvol/common.sh@27 -- # rpc_cmd bdev_lvol_get_lvstores 00:14:06.373 06:47:59 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:06.373 06:47:59 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:06.373 06:47:59 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:06.373 06:47:59 ublk.test_create_ublk -- lvol/common.sh@27 -- # leftover_lvs='[]' 00:14:06.373 06:47:59 ublk.test_create_ublk -- lvol/common.sh@28 -- # jq length 00:14:06.373 ************************************ 00:14:06.373 END TEST test_create_ublk 00:14:06.373 ************************************ 00:14:06.373 06:47:59 ublk.test_create_ublk -- lvol/common.sh@28 -- # '[' 0 == 0 ']' 00:14:06.373 00:14:06.373 real 0m10.790s 00:14:06.373 user 0m0.592s 00:14:06.373 sys 0m1.437s 00:14:06.373 06:47:59 ublk.test_create_ublk -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:06.373 06:47:59 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:06.373 06:47:59 ublk -- ublk/ublk.sh@144 -- # run_test test_create_multi_ublk test_create_multi_ublk 00:14:06.373 06:47:59 ublk -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:14:06.373 06:47:59 ublk -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:06.373 06:47:59 ublk -- common/autotest_common.sh@10 -- # set +x 00:14:06.373 ************************************ 00:14:06.373 START TEST test_create_multi_ublk 00:14:06.373 ************************************ 00:14:06.373 06:47:59 ublk.test_create_multi_ublk -- common/autotest_common.sh@1129 -- # test_create_multi_ublk 00:14:06.373 06:47:59 ublk.test_create_multi_ublk -- ublk/ublk.sh@62 -- # rpc_cmd ublk_create_target 00:14:06.373 06:47:59 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:06.373 06:47:59 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:06.373 [2024-11-18 06:47:59.309998] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:14:06.373 [2024-11-18 06:47:59.310956] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:14:06.373 06:47:59 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:06.373 06:47:59 ublk.test_create_multi_ublk -- ublk/ublk.sh@62 -- # ublk_target= 00:14:06.373 06:47:59 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # seq 0 3 00:14:06.373 06:47:59 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:06.373 06:47:59 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc0 128 4096 00:14:06.373 06:47:59 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:06.373 06:47:59 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:06.373 06:47:59 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:06.373 06:47:59 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc0 00:14:06.373 06:47:59 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc0 0 -q 4 -d 512 00:14:06.373 06:47:59 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:06.373 06:47:59 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:06.373 [2024-11-18 06:47:59.382096] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev Malloc0 
num_queues 4 queue_depth 512 00:14:06.373 [2024-11-18 06:47:59.382392] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc0 via ublk 0 00:14:06.373 [2024-11-18 06:47:59.382405] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:14:06.373 [2024-11-18 06:47:59.382410] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:14:06.373 [2024-11-18 06:47:59.406016] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:14:06.373 [2024-11-18 06:47:59.406035] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:14:06.373 [2024-11-18 06:47:59.418003] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:14:06.373 [2024-11-18 06:47:59.418477] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:14:06.631 [2024-11-18 06:47:59.458001] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:14:06.631 06:47:59 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:06.631 06:47:59 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=0 00:14:06.631 06:47:59 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:06.631 06:47:59 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc1 128 4096 00:14:06.631 06:47:59 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:06.631 06:47:59 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:06.631 06:47:59 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:06.631 06:47:59 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc1 00:14:06.631 06:47:59 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc1 1 -q 4 -d 512 00:14:06.631 06:47:59 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:06.631 06:47:59 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:06.631 [2024-11-18 06:47:59.542095] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk1: bdev Malloc1 num_queues 4 queue_depth 512 00:14:06.631 [2024-11-18 06:47:59.542387] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc1 via ublk 1 00:14:06.631 [2024-11-18 06:47:59.542399] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:14:06.631 [2024-11-18 06:47:59.542405] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV 00:14:06.631 [2024-11-18 06:47:59.554009] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV completed 00:14:06.631 [2024-11-18 06:47:59.554030] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS 00:14:06.631 [2024-11-18 06:47:59.566002] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:14:06.631 [2024-11-18 06:47:59.566480] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV 00:14:06.631 [2024-11-18 06:47:59.601999] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV completed 00:14:06.631 06:47:59 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:06.631 06:47:59 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=1 00:14:06.631 06:47:59 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:06.631 
06:47:59 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc2 128 4096 00:14:06.631 06:47:59 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:06.631 06:47:59 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:06.631 06:47:59 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:06.631 06:47:59 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc2 00:14:06.631 06:47:59 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc2 2 -q 4 -d 512 00:14:06.631 06:47:59 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:06.631 06:47:59 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:06.631 [2024-11-18 06:47:59.686090] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk2: bdev Malloc2 num_queues 4 queue_depth 512 00:14:06.631 [2024-11-18 06:47:59.686385] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc2 via ublk 2 00:14:06.631 [2024-11-18 06:47:59.686398] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk2: add to tailq 00:14:06.631 [2024-11-18 06:47:59.686402] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_ADD_DEV 00:14:06.631 [2024-11-18 06:47:59.698017] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_ADD_DEV completed 00:14:06.631 [2024-11-18 06:47:59.698033] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_SET_PARAMS 00:14:06.631 [2024-11-18 06:47:59.710008] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:14:06.631 [2024-11-18 06:47:59.710486] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_START_DEV 00:14:06.890 [2024-11-18 06:47:59.746006] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_START_DEV completed 00:14:06.890 06:47:59 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:06.890 06:47:59 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=2 00:14:06.890 06:47:59 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:06.890 06:47:59 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc3 128 4096 00:14:06.890 06:47:59 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:06.890 06:47:59 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:06.890 06:47:59 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:06.890 06:47:59 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc3 00:14:06.890 06:47:59 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc3 3 -q 4 -d 512 00:14:06.890 06:47:59 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:06.890 06:47:59 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:06.890 [2024-11-18 06:47:59.830095] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk3: bdev Malloc3 num_queues 4 queue_depth 512 00:14:06.890 [2024-11-18 06:47:59.830391] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc3 via ublk 3 00:14:06.890 [2024-11-18 06:47:59.830403] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk3: add to tailq 00:14:06.890 [2024-11-18 06:47:59.830409] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_ADD_DEV 00:14:06.890 
[2024-11-18 06:47:59.842015] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_ADD_DEV completed 00:14:06.890 [2024-11-18 06:47:59.842037] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_SET_PARAMS 00:14:06.890 [2024-11-18 06:47:59.854004] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:14:06.890 [2024-11-18 06:47:59.854493] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_START_DEV 00:14:06.890 [2024-11-18 06:47:59.875010] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_START_DEV completed 00:14:06.890 06:47:59 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:06.890 06:47:59 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=3 00:14:06.890 06:47:59 ublk.test_create_multi_ublk -- ublk/ublk.sh@71 -- # rpc_cmd ublk_get_disks 00:14:06.890 06:47:59 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:06.890 06:47:59 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:06.890 06:47:59 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:06.890 06:47:59 ublk.test_create_multi_ublk -- ublk/ublk.sh@71 -- # ublk_dev='[ 00:14:06.890 { 00:14:06.890 "ublk_device": "/dev/ublkb0", 00:14:06.890 "id": 0, 00:14:06.890 "queue_depth": 512, 00:14:06.890 "num_queues": 4, 00:14:06.890 "bdev_name": "Malloc0" 00:14:06.890 }, 00:14:06.890 { 00:14:06.890 "ublk_device": "/dev/ublkb1", 00:14:06.890 "id": 1, 00:14:06.890 "queue_depth": 512, 00:14:06.890 "num_queues": 4, 00:14:06.890 "bdev_name": "Malloc1" 00:14:06.890 }, 00:14:06.890 { 00:14:06.890 "ublk_device": "/dev/ublkb2", 00:14:06.890 "id": 2, 00:14:06.890 "queue_depth": 512, 00:14:06.890 "num_queues": 4, 00:14:06.890 "bdev_name": "Malloc2" 00:14:06.890 }, 00:14:06.890 { 00:14:06.890 "ublk_device": "/dev/ublkb3", 00:14:06.890 "id": 3, 00:14:06.890 "queue_depth": 512, 00:14:06.890 "num_queues": 4, 00:14:06.890 "bdev_name": "Malloc3" 00:14:06.890 } 00:14:06.890 ]' 00:14:06.890 06:47:59 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # seq 0 3 00:14:06.890 06:47:59 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:06.890 06:47:59 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[0].ublk_device' 00:14:06.890 06:47:59 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb0 = \/\d\e\v\/\u\b\l\k\b\0 ]] 00:14:06.890 06:47:59 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[0].id' 00:14:07.149 06:47:59 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 0 = \0 ]] 00:14:07.149 06:47:59 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[0].queue_depth' 00:14:07.149 06:48:00 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:14:07.149 06:48:00 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[0].num_queues' 00:14:07.149 06:48:00 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:14:07.149 06:48:00 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[0].bdev_name' 00:14:07.149 06:48:00 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc0 = \M\a\l\l\o\c\0 ]] 00:14:07.149 06:48:00 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:07.149 06:48:00 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[1].ublk_device' 00:14:07.149 06:48:00 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb1 = 
\/\d\e\v\/\u\b\l\k\b\1 ]] 00:14:07.149 06:48:00 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[1].id' 00:14:07.149 06:48:00 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 1 = \1 ]] 00:14:07.149 06:48:00 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[1].queue_depth' 00:14:07.149 06:48:00 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:14:07.149 06:48:00 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[1].num_queues' 00:14:07.149 06:48:00 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:14:07.149 06:48:00 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[1].bdev_name' 00:14:07.407 06:48:00 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc1 = \M\a\l\l\o\c\1 ]] 00:14:07.407 06:48:00 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:07.407 06:48:00 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[2].ublk_device' 00:14:07.407 06:48:00 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb2 = \/\d\e\v\/\u\b\l\k\b\2 ]] 00:14:07.407 06:48:00 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[2].id' 00:14:07.407 06:48:00 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 2 = \2 ]] 00:14:07.407 06:48:00 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[2].queue_depth' 00:14:07.407 06:48:00 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:14:07.407 06:48:00 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[2].num_queues' 00:14:07.407 06:48:00 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:14:07.407 06:48:00 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[2].bdev_name' 00:14:07.407 06:48:00 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc2 = \M\a\l\l\o\c\2 ]] 00:14:07.407 06:48:00 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:07.407 06:48:00 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[3].ublk_device' 00:14:07.407 06:48:00 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb3 = \/\d\e\v\/\u\b\l\k\b\3 ]] 00:14:07.407 06:48:00 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[3].id' 00:14:07.407 06:48:00 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 3 = \3 ]] 00:14:07.407 06:48:00 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[3].queue_depth' 00:14:07.407 06:48:00 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:14:07.407 06:48:00 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[3].num_queues' 00:14:07.666 06:48:00 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:14:07.666 06:48:00 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[3].bdev_name' 00:14:07.666 06:48:00 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc3 = \M\a\l\l\o\c\3 ]] 00:14:07.666 06:48:00 ublk.test_create_multi_ublk -- ublk/ublk.sh@84 -- # [[ 1 = \1 ]] 00:14:07.666 06:48:00 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # seq 0 3 00:14:07.666 06:48:00 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:07.666 06:48:00 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 0 00:14:07.666 06:48:00 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:07.666 06:48:00 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:07.666 [2024-11-18 06:48:00.547091] ublk.c: 
469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:14:07.666 [2024-11-18 06:48:00.587038] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:14:07.666 [2024-11-18 06:48:00.587939] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:14:07.666 [2024-11-18 06:48:00.595010] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:14:07.666 [2024-11-18 06:48:00.595266] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:14:07.666 [2024-11-18 06:48:00.595279] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:14:07.666 06:48:00 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:07.666 06:48:00 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:07.666 06:48:00 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 1 00:14:07.666 06:48:00 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:07.666 06:48:00 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:07.666 [2024-11-18 06:48:00.611060] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV 00:14:07.666 [2024-11-18 06:48:00.651033] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV completed 00:14:07.666 [2024-11-18 06:48:00.651858] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV 00:14:07.666 [2024-11-18 06:48:00.660033] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV completed 00:14:07.666 [2024-11-18 06:48:00.660258] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk1: remove from tailq 00:14:07.666 [2024-11-18 06:48:00.660264] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 1 stopped 00:14:07.666 06:48:00 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:07.666 06:48:00 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:07.666 06:48:00 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 2 00:14:07.666 06:48:00 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:07.666 06:48:00 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:07.666 [2024-11-18 06:48:00.672061] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_STOP_DEV 00:14:07.666 [2024-11-18 06:48:00.719041] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_STOP_DEV completed 00:14:07.666 [2024-11-18 06:48:00.719816] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_DEL_DEV 00:14:07.666 [2024-11-18 06:48:00.727000] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_DEL_DEV completed 00:14:07.666 [2024-11-18 06:48:00.727255] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk2: remove from tailq 00:14:07.666 [2024-11-18 06:48:00.727265] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 2 stopped 00:14:07.666 06:48:00 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:07.666 06:48:00 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:07.666 06:48:00 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 3 00:14:07.666 06:48:00 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:07.666 06:48:00 ublk.test_create_multi_ublk -- 
common/autotest_common.sh@10 -- # set +x 00:14:07.666 [2024-11-18 06:48:00.743054] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_STOP_DEV 00:14:07.925 [2024-11-18 06:48:00.777028] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_STOP_DEV completed 00:14:07.925 [2024-11-18 06:48:00.777692] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_DEL_DEV 00:14:07.925 [2024-11-18 06:48:00.785994] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_DEL_DEV completed 00:14:07.925 [2024-11-18 06:48:00.786254] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk3: remove from tailq 00:14:07.925 [2024-11-18 06:48:00.786265] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 3 stopped 00:14:07.925 06:48:00 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:07.925 06:48:00 ublk.test_create_multi_ublk -- ublk/ublk.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 120 ublk_destroy_target 00:14:07.925 [2024-11-18 06:48:00.978063] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:14:07.925 [2024-11-18 06:48:00.979293] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:14:07.925 [2024-11-18 06:48:00.979325] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:14:07.925 06:48:01 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # seq 0 3 00:14:07.925 06:48:01 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:07.925 06:48:01 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc0 00:14:07.925 06:48:01 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:07.925 06:48:01 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:08.183 06:48:01 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:08.183 06:48:01 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:08.183 06:48:01 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc1 00:14:08.183 06:48:01 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:08.183 06:48:01 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:08.183 06:48:01 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:08.183 06:48:01 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:08.183 06:48:01 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc2 00:14:08.183 06:48:01 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:08.183 06:48:01 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:08.183 06:48:01 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:08.183 06:48:01 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:08.183 06:48:01 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc3 00:14:08.183 06:48:01 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:08.183 06:48:01 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:08.183 06:48:01 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:08.183 06:48:01 ublk.test_create_multi_ublk -- ublk/ublk.sh@96 -- # check_leftover_devices 00:14:08.183 06:48:01 
ublk.test_create_multi_ublk -- lvol/common.sh@25 -- # rpc_cmd bdev_get_bdevs 00:14:08.183 06:48:01 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:08.183 06:48:01 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:08.183 06:48:01 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:08.183 06:48:01 ublk.test_create_multi_ublk -- lvol/common.sh@25 -- # leftover_bdevs='[]' 00:14:08.183 06:48:01 ublk.test_create_multi_ublk -- lvol/common.sh@26 -- # jq length 00:14:08.442 06:48:01 ublk.test_create_multi_ublk -- lvol/common.sh@26 -- # '[' 0 == 0 ']' 00:14:08.442 06:48:01 ublk.test_create_multi_ublk -- lvol/common.sh@27 -- # rpc_cmd bdev_lvol_get_lvstores 00:14:08.442 06:48:01 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:08.442 06:48:01 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:08.442 06:48:01 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:08.442 06:48:01 ublk.test_create_multi_ublk -- lvol/common.sh@27 -- # leftover_lvs='[]' 00:14:08.442 06:48:01 ublk.test_create_multi_ublk -- lvol/common.sh@28 -- # jq length 00:14:08.442 ************************************ 00:14:08.442 END TEST test_create_multi_ublk 00:14:08.442 ************************************ 00:14:08.442 06:48:01 ublk.test_create_multi_ublk -- lvol/common.sh@28 -- # '[' 0 == 0 ']' 00:14:08.442 00:14:08.442 real 0m2.021s 00:14:08.442 user 0m0.823s 00:14:08.442 sys 0m0.131s 00:14:08.442 06:48:01 ublk.test_create_multi_ublk -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:08.442 06:48:01 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:08.442 06:48:01 ublk -- ublk/ublk.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:14:08.442 06:48:01 ublk -- ublk/ublk.sh@147 -- # cleanup 00:14:08.442 06:48:01 ublk -- ublk/ublk.sh@130 -- # killprocess 82440 00:14:08.442 06:48:01 ublk -- common/autotest_common.sh@954 -- # '[' -z 82440 ']' 00:14:08.442 06:48:01 ublk -- common/autotest_common.sh@958 -- # kill -0 82440 00:14:08.442 06:48:01 ublk -- common/autotest_common.sh@959 -- # uname 00:14:08.442 06:48:01 ublk -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:08.442 06:48:01 ublk -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 82440 00:14:08.442 killing process with pid 82440 00:14:08.442 06:48:01 ublk -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:08.442 06:48:01 ublk -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:08.442 06:48:01 ublk -- common/autotest_common.sh@972 -- # echo 'killing process with pid 82440' 00:14:08.442 06:48:01 ublk -- common/autotest_common.sh@973 -- # kill 82440 00:14:08.442 06:48:01 ublk -- common/autotest_common.sh@978 -- # wait 82440 00:14:08.702 [2024-11-18 06:48:01.534374] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:14:08.702 [2024-11-18 06:48:01.534440] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:14:08.962 00:14:08.962 real 0m18.413s 00:14:08.962 user 0m28.464s 00:14:08.962 sys 0m7.662s 00:14:08.962 06:48:01 ublk -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:08.962 ************************************ 00:14:08.962 END TEST ublk 00:14:08.962 ************************************ 00:14:08.962 06:48:01 ublk -- common/autotest_common.sh@10 -- # set +x 00:14:08.962 06:48:01 -- spdk/autotest.sh@248 -- # run_test ublk_recovery /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh 00:14:08.962 
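
Note: the ublk_recovery suite that starts here exercises crash recovery of a live ublk device: a target is started, /dev/ublkb1 is created and put under fio load, and the target process is then killed outright while I/O is in flight. Condensed into a sketch (socket-wait step elided; paths as elsewhere in this run):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    modprobe ublk_drv
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk &
    spdk_pid=$!
    $rpc ublk_create_target
    $rpc bdev_malloc_create -b malloc0 64 4096    # 64 MiB backing bdev named malloc0
    $rpc ublk_start_disk malloc0 1 -q 2 -d 128    # /dev/ublkb1: 2 queues, queue depth 128
    fio --name=fio_test --filename=/dev/ublkb1 --numjobs=1 --iodepth=128 \
        --ioengine=libaio --rw=randrw --direct=1 --time_based --runtime=60 &
    fio_pid=$!
    sleep 5
    kill -9 "$spdk_pid"                           # crash the target mid-I/O
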
06:48:01 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:14:08.962 06:48:01 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:08.962 06:48:01 -- common/autotest_common.sh@10 -- # set +x 00:14:08.962 ************************************ 00:14:08.962 START TEST ublk_recovery 00:14:08.962 ************************************ 00:14:08.962 06:48:01 ublk_recovery -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh 00:14:08.962 * Looking for test storage... 00:14:08.962 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ublk 00:14:08.962 06:48:01 ublk_recovery -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:14:08.962 06:48:01 ublk_recovery -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:14:08.962 06:48:01 ublk_recovery -- common/autotest_common.sh@1693 -- # lcov --version 00:14:08.962 06:48:01 ublk_recovery -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:14:08.962 06:48:01 ublk_recovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:08.962 06:48:01 ublk_recovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:08.962 06:48:01 ublk_recovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:08.962 06:48:01 ublk_recovery -- scripts/common.sh@336 -- # IFS=.-: 00:14:08.962 06:48:01 ublk_recovery -- scripts/common.sh@336 -- # read -ra ver1 00:14:08.962 06:48:01 ublk_recovery -- scripts/common.sh@337 -- # IFS=.-: 00:14:08.962 06:48:01 ublk_recovery -- scripts/common.sh@337 -- # read -ra ver2 00:14:08.962 06:48:01 ublk_recovery -- scripts/common.sh@338 -- # local 'op=<' 00:14:08.962 06:48:01 ublk_recovery -- scripts/common.sh@340 -- # ver1_l=2 00:14:08.962 06:48:01 ublk_recovery -- scripts/common.sh@341 -- # ver2_l=1 00:14:08.962 06:48:01 ublk_recovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:08.962 06:48:01 ublk_recovery -- scripts/common.sh@344 -- # case "$op" in 00:14:08.962 06:48:01 ublk_recovery -- scripts/common.sh@345 -- # : 1 00:14:08.962 06:48:01 ublk_recovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:08.962 06:48:01 ublk_recovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:08.962 06:48:01 ublk_recovery -- scripts/common.sh@365 -- # decimal 1 00:14:08.962 06:48:01 ublk_recovery -- scripts/common.sh@353 -- # local d=1 00:14:08.962 06:48:01 ublk_recovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:08.962 06:48:01 ublk_recovery -- scripts/common.sh@355 -- # echo 1 00:14:08.962 06:48:01 ublk_recovery -- scripts/common.sh@365 -- # ver1[v]=1 00:14:08.962 06:48:01 ublk_recovery -- scripts/common.sh@366 -- # decimal 2 00:14:08.962 06:48:01 ublk_recovery -- scripts/common.sh@353 -- # local d=2 00:14:08.962 06:48:01 ublk_recovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:08.962 06:48:01 ublk_recovery -- scripts/common.sh@355 -- # echo 2 00:14:08.962 06:48:01 ublk_recovery -- scripts/common.sh@366 -- # ver2[v]=2 00:14:08.962 06:48:01 ublk_recovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:08.962 06:48:01 ublk_recovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:08.962 06:48:01 ublk_recovery -- scripts/common.sh@368 -- # return 0 00:14:08.962 06:48:01 ublk_recovery -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:08.962 06:48:01 ublk_recovery -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:14:08.962 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:08.962 --rc genhtml_branch_coverage=1 00:14:08.962 --rc genhtml_function_coverage=1 00:14:08.962 --rc genhtml_legend=1 00:14:08.962 --rc geninfo_all_blocks=1 00:14:08.962 --rc geninfo_unexecuted_blocks=1 00:14:08.962 00:14:08.962 ' 00:14:08.962 06:48:01 ublk_recovery -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:14:08.962 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:08.962 --rc genhtml_branch_coverage=1 00:14:08.962 --rc genhtml_function_coverage=1 00:14:08.962 --rc genhtml_legend=1 00:14:08.962 --rc geninfo_all_blocks=1 00:14:08.962 --rc geninfo_unexecuted_blocks=1 00:14:08.962 00:14:08.962 ' 00:14:08.962 06:48:01 ublk_recovery -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:14:08.962 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:08.962 --rc genhtml_branch_coverage=1 00:14:08.962 --rc genhtml_function_coverage=1 00:14:08.962 --rc genhtml_legend=1 00:14:08.962 --rc geninfo_all_blocks=1 00:14:08.962 --rc geninfo_unexecuted_blocks=1 00:14:08.962 00:14:08.962 ' 00:14:08.962 06:48:01 ublk_recovery -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:14:08.962 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:08.962 --rc genhtml_branch_coverage=1 00:14:08.962 --rc genhtml_function_coverage=1 00:14:08.962 --rc genhtml_legend=1 00:14:08.962 --rc geninfo_all_blocks=1 00:14:08.962 --rc geninfo_unexecuted_blocks=1 00:14:08.962 00:14:08.962 ' 00:14:08.962 06:48:01 ublk_recovery -- ublk/ublk_recovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/lvol/common.sh 00:14:08.962 06:48:01 ublk_recovery -- lvol/common.sh@6 -- # MALLOC_SIZE_MB=128 00:14:08.963 06:48:01 ublk_recovery -- lvol/common.sh@7 -- # MALLOC_BS=512 00:14:08.963 06:48:01 ublk_recovery -- lvol/common.sh@8 -- # AIO_SIZE_MB=400 00:14:08.963 06:48:01 ublk_recovery -- lvol/common.sh@9 -- # AIO_BS=4096 00:14:08.963 06:48:01 ublk_recovery -- lvol/common.sh@10 -- # LVS_DEFAULT_CLUSTER_SIZE_MB=4 00:14:08.963 06:48:01 ublk_recovery -- lvol/common.sh@11 -- # LVS_DEFAULT_CLUSTER_SIZE=4194304 00:14:08.963 06:48:01 ublk_recovery -- lvol/common.sh@13 -- # LVS_DEFAULT_CAPACITY_MB=124 00:14:08.963 06:48:01 ublk_recovery -- lvol/common.sh@14 
-- # LVS_DEFAULT_CAPACITY=130023424 00:14:08.963 06:48:01 ublk_recovery -- ublk/ublk_recovery.sh@11 -- # modprobe ublk_drv 00:14:08.963 06:48:01 ublk_recovery -- ublk/ublk_recovery.sh@19 -- # spdk_pid=82807 00:14:08.963 06:48:01 ublk_recovery -- ublk/ublk_recovery.sh@20 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:14:08.963 06:48:01 ublk_recovery -- ublk/ublk_recovery.sh@21 -- # waitforlisten 82807 00:14:08.963 06:48:01 ublk_recovery -- ublk/ublk_recovery.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:14:08.963 06:48:01 ublk_recovery -- common/autotest_common.sh@835 -- # '[' -z 82807 ']' 00:14:08.963 06:48:01 ublk_recovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:08.963 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:08.963 06:48:01 ublk_recovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:08.963 06:48:01 ublk_recovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:08.963 06:48:01 ublk_recovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:08.963 06:48:01 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:14:09.222 [2024-11-18 06:48:02.065658] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 00:14:09.222 [2024-11-18 06:48:02.066310] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82807 ] 00:14:09.222 [2024-11-18 06:48:02.224528] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:14:09.222 [2024-11-18 06:48:02.248841] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:09.222 [2024-11-18 06:48:02.248922] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:10.157 06:48:02 ublk_recovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:10.157 06:48:02 ublk_recovery -- common/autotest_common.sh@868 -- # return 0 00:14:10.157 06:48:02 ublk_recovery -- ublk/ublk_recovery.sh@23 -- # rpc_cmd ublk_create_target 00:14:10.157 06:48:02 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:10.157 06:48:02 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:14:10.157 [2024-11-18 06:48:02.904993] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:14:10.157 [2024-11-18 06:48:02.905956] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:14:10.157 06:48:02 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:10.157 06:48:02 ublk_recovery -- ublk/ublk_recovery.sh@24 -- # rpc_cmd bdev_malloc_create -b malloc0 64 4096 00:14:10.157 06:48:02 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:10.157 06:48:02 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:14:10.157 malloc0 00:14:10.157 06:48:02 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:10.157 06:48:02 ublk_recovery -- ublk/ublk_recovery.sh@25 -- # rpc_cmd ublk_start_disk malloc0 1 -q 2 -d 128 00:14:10.157 06:48:02 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:10.157 06:48:02 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:14:10.157 [2024-11-18 06:48:02.937086] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk1: bdev malloc0 num_queues 
2 queue_depth 128 00:14:10.157 [2024-11-18 06:48:02.937175] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 1 00:14:10.157 [2024-11-18 06:48:02.937182] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:14:10.157 [2024-11-18 06:48:02.937188] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV 00:14:10.157 [2024-11-18 06:48:02.946080] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV completed 00:14:10.157 [2024-11-18 06:48:02.946099] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS 00:14:10.157 [2024-11-18 06:48:02.953004] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:14:10.157 [2024-11-18 06:48:02.953118] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV 00:14:10.157 [2024-11-18 06:48:02.964013] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV completed 00:14:10.157 1 00:14:10.157 06:48:02 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:10.157 06:48:02 ublk_recovery -- ublk/ublk_recovery.sh@27 -- # sleep 1 00:14:11.093 06:48:03 ublk_recovery -- ublk/ublk_recovery.sh@31 -- # fio_proc=82834 00:14:11.093 06:48:03 ublk_recovery -- ublk/ublk_recovery.sh@33 -- # sleep 5 00:14:11.093 06:48:03 ublk_recovery -- ublk/ublk_recovery.sh@30 -- # taskset -c 2-3 fio --name=fio_test --filename=/dev/ublkb1 --numjobs=1 --iodepth=128 --ioengine=libaio --rw=randrw --direct=1 --time_based --runtime=60 00:14:11.093 fio_test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:14:11.093 fio-3.35 00:14:11.093 Starting 1 process 00:14:16.360 06:48:08 ublk_recovery -- ublk/ublk_recovery.sh@36 -- # kill -9 82807 00:14:16.360 06:48:08 ublk_recovery -- ublk/ublk_recovery.sh@38 -- # sleep 5 00:14:21.645 /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh: line 38: 82807 Killed "$SPDK_BIN_DIR/spdk_tgt" -m 0x3 -L ublk 00:14:21.645 06:48:13 ublk_recovery -- ublk/ublk_recovery.sh@42 -- # spdk_pid=82945 00:14:21.645 06:48:13 ublk_recovery -- ublk/ublk_recovery.sh@43 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:14:21.645 06:48:13 ublk_recovery -- ublk/ublk_recovery.sh@44 -- # waitforlisten 82945 00:14:21.645 06:48:13 ublk_recovery -- common/autotest_common.sh@835 -- # '[' -z 82945 ']' 00:14:21.645 06:48:13 ublk_recovery -- ublk/ublk_recovery.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:14:21.645 06:48:13 ublk_recovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:21.645 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:21.645 06:48:13 ublk_recovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:21.645 06:48:13 ublk_recovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:21.645 06:48:13 ublk_recovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:21.645 06:48:13 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:14:21.645 [2024-11-18 06:48:14.072360] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 
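
Note: with the old target dead but the kernel-side device still present, a fresh spdk_tgt (pid 82945 in this run) takes over. The reattach that the log traces next (UBLK_CMD_GET_DEV_INFO, then START_USER_RECOVERY and END_USER_RECOVERY) reduces to three RPC calls plus waiting out the fio job; a sketch continuing the one started before the crash:

    $rpc ublk_create_target                       # fresh ublk target in the new process
    $rpc bdev_malloc_create -b malloc0 64 4096    # recreate the backing bdev under the same name
    $rpc ublk_recover_disk malloc0 1              # reattach /dev/ublkb1 to the new process
    wait "$fio_pid"                               # the fio job started before the crash runs to completion
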
00:14:21.645 [2024-11-18 06:48:14.072482] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82945 ] 00:14:21.645 [2024-11-18 06:48:14.229672] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:14:21.645 [2024-11-18 06:48:14.249832] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:21.645 [2024-11-18 06:48:14.249886] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:21.904 06:48:14 ublk_recovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:21.904 06:48:14 ublk_recovery -- common/autotest_common.sh@868 -- # return 0 00:14:21.904 06:48:14 ublk_recovery -- ublk/ublk_recovery.sh@47 -- # rpc_cmd ublk_create_target 00:14:21.904 06:48:14 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:21.904 06:48:14 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:14:21.904 [2024-11-18 06:48:14.861999] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:14:21.904 [2024-11-18 06:48:14.863080] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:14:21.904 06:48:14 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:21.904 06:48:14 ublk_recovery -- ublk/ublk_recovery.sh@48 -- # rpc_cmd bdev_malloc_create -b malloc0 64 4096 00:14:21.904 06:48:14 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:21.904 06:48:14 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:14:21.904 malloc0 00:14:21.904 06:48:14 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:21.904 06:48:14 ublk_recovery -- ublk/ublk_recovery.sh@49 -- # rpc_cmd ublk_recover_disk malloc0 1 00:14:21.904 06:48:14 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:21.904 06:48:14 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:14:21.904 [2024-11-18 06:48:14.894425] ublk.c:2106:ublk_start_disk_recovery: *NOTICE*: Recovering ublk 1 with bdev malloc0 00:14:21.904 [2024-11-18 06:48:14.894468] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:14:21.904 [2024-11-18 06:48:14.894476] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO 00:14:21.904 [2024-11-18 06:48:14.902044] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO completed 00:14:21.904 [2024-11-18 06:48:14.902064] ublk.c: 391:ublk_ctrl_process_cqe: *DEBUG*: ublk1: Ublk 1 device state 2 00:14:21.904 [2024-11-18 06:48:14.902077] ublk.c:2035:ublk_ctrl_start_recovery: *DEBUG*: Recovering ublk 1, num queues 2, queue depth 128, flags 0xda 00:14:21.904 [2024-11-18 06:48:14.902152] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_USER_RECOVERY 00:14:21.904 1 00:14:21.904 06:48:14 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:21.904 06:48:14 ublk_recovery -- ublk/ublk_recovery.sh@52 -- # wait 82834 00:14:21.904 [2024-11-18 06:48:14.910010] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_USER_RECOVERY completed 00:14:21.904 [2024-11-18 06:48:14.912882] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_END_USER_RECOVERY 00:14:21.904 [2024-11-18 06:48:14.917219] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_END_USER_RECOVERY completed 00:14:21.904 [2024-11-18 
06:48:14.917237] ublk.c: 413:ublk_ctrl_process_cqe: *NOTICE*: Ublk 1 recover done successfully 00:15:18.122 00:15:18.122 fio_test: (groupid=0, jobs=1): err= 0: pid=82843: Mon Nov 18 06:49:04 2024 00:15:18.122 read: IOPS=27.7k, BW=108MiB/s (113MB/s)(6494MiB/60002msec) 00:15:18.122 slat (nsec): min=971, max=135069, avg=4890.19, stdev=1314.34 00:15:18.122 clat (usec): min=593, max=5949.0k, avg=2270.91, stdev=37180.29 00:15:18.122 lat (usec): min=597, max=5949.0k, avg=2275.80, stdev=37180.29 00:15:18.122 clat percentiles (usec): 00:15:18.122 | 1.00th=[ 1745], 5.00th=[ 1844], 10.00th=[ 1860], 20.00th=[ 1893], 00:15:18.122 | 30.00th=[ 1909], 40.00th=[ 1909], 50.00th=[ 1926], 60.00th=[ 1942], 00:15:18.122 | 70.00th=[ 1958], 80.00th=[ 1975], 90.00th=[ 2008], 95.00th=[ 2835], 00:15:18.122 | 99.00th=[ 4817], 99.50th=[ 5276], 99.90th=[ 6587], 99.95th=[ 7570], 00:15:18.122 | 99.99th=[13042] 00:15:18.122 bw ( KiB/s): min=25456, max=126312, per=100.00%, avg=122080.74, stdev=13094.90, samples=108 00:15:18.122 iops : min= 6364, max=31578, avg=30520.19, stdev=3273.73, samples=108 00:15:18.122 write: IOPS=27.7k, BW=108MiB/s (113MB/s)(6489MiB/60002msec); 0 zone resets 00:15:18.122 slat (nsec): min=1006, max=109778, avg=4915.89, stdev=1323.06 00:15:18.122 clat (usec): min=613, max=5949.1k, avg=2339.91, stdev=36618.43 00:15:18.122 lat (usec): min=618, max=5949.1k, avg=2344.83, stdev=36618.43 00:15:18.122 clat percentiles (usec): 00:15:18.122 | 1.00th=[ 1778], 5.00th=[ 1926], 10.00th=[ 1942], 20.00th=[ 1975], 00:15:18.122 | 30.00th=[ 1991], 40.00th=[ 2008], 50.00th=[ 2024], 60.00th=[ 2040], 00:15:18.122 | 70.00th=[ 2040], 80.00th=[ 2073], 90.00th=[ 2114], 95.00th=[ 2704], 00:15:18.122 | 99.00th=[ 4752], 99.50th=[ 5342], 99.90th=[ 6652], 99.95th=[ 7439], 00:15:18.122 | 99.99th=[13042] 00:15:18.122 bw ( KiB/s): min=25160, max=126560, per=100.00%, avg=121973.04, stdev=13157.61, samples=108 00:15:18.122 iops : min= 6290, max=31640, avg=30493.26, stdev=3289.40, samples=108 00:15:18.122 lat (usec) : 750=0.01%, 1000=0.01% 00:15:18.122 lat (msec) : 2=61.90%, 4=36.00%, 10=2.08%, 20=0.01%, >=2000=0.01% 00:15:18.122 cpu : usr=6.33%, sys=27.94%, ctx=109810, majf=0, minf=15 00:15:18.122 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0% 00:15:18.122 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:18.122 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:18.122 issued rwts: total=1662453,1661157,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:18.122 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:18.122 00:15:18.122 Run status group 0 (all jobs): 00:15:18.122 READ: bw=108MiB/s (113MB/s), 108MiB/s-108MiB/s (113MB/s-113MB/s), io=6494MiB (6809MB), run=60002-60002msec 00:15:18.122 WRITE: bw=108MiB/s (113MB/s), 108MiB/s-108MiB/s (113MB/s-113MB/s), io=6489MiB (6804MB), run=60002-60002msec 00:15:18.122 00:15:18.122 Disk stats (read/write): 00:15:18.122 ublkb1: ios=1659136/1657764, merge=0/0, ticks=3681936/3658397, in_queue=7340334, util=99.88% 00:15:18.122 06:49:04 ublk_recovery -- ublk/ublk_recovery.sh@55 -- # rpc_cmd ublk_stop_disk 1 00:15:18.122 06:49:04 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:18.122 06:49:04 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:15:18.122 [2024-11-18 06:49:04.227843] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV 00:15:18.122 [2024-11-18 06:49:04.263098] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV 
completed 00:15:18.122 [2024-11-18 06:49:04.263247] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV 00:15:18.122 [2024-11-18 06:49:04.272000] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV completed 00:15:18.122 [2024-11-18 06:49:04.272119] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk1: remove from tailq 00:15:18.122 [2024-11-18 06:49:04.272126] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 1 stopped 00:15:18.122 06:49:04 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:18.122 06:49:04 ublk_recovery -- ublk/ublk_recovery.sh@56 -- # rpc_cmd ublk_destroy_target 00:15:18.122 06:49:04 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:18.122 06:49:04 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:15:18.122 [2024-11-18 06:49:04.287071] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:15:18.122 [2024-11-18 06:49:04.288355] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:15:18.122 [2024-11-18 06:49:04.288387] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:15:18.122 06:49:04 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:18.122 06:49:04 ublk_recovery -- ublk/ublk_recovery.sh@58 -- # trap - SIGINT SIGTERM EXIT 00:15:18.122 06:49:04 ublk_recovery -- ublk/ublk_recovery.sh@59 -- # cleanup 00:15:18.122 06:49:04 ublk_recovery -- ublk/ublk_recovery.sh@14 -- # killprocess 82945 00:15:18.122 06:49:04 ublk_recovery -- common/autotest_common.sh@954 -- # '[' -z 82945 ']' 00:15:18.122 06:49:04 ublk_recovery -- common/autotest_common.sh@958 -- # kill -0 82945 00:15:18.122 06:49:04 ublk_recovery -- common/autotest_common.sh@959 -- # uname 00:15:18.122 06:49:04 ublk_recovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:18.122 06:49:04 ublk_recovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 82945 00:15:18.122 killing process with pid 82945 00:15:18.122 06:49:04 ublk_recovery -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:18.122 06:49:04 ublk_recovery -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:18.122 06:49:04 ublk_recovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 82945' 00:15:18.122 06:49:04 ublk_recovery -- common/autotest_common.sh@973 -- # kill 82945 00:15:18.122 06:49:04 ublk_recovery -- common/autotest_common.sh@978 -- # wait 82945 00:15:18.122 [2024-11-18 06:49:04.488079] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:15:18.122 [2024-11-18 06:49:04.488138] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:15:18.122 00:15:18.122 real 1m2.910s 00:15:18.122 user 1m40.833s 00:15:18.122 sys 0m34.908s 00:15:18.122 06:49:04 ublk_recovery -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:18.122 06:49:04 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:15:18.122 ************************************ 00:15:18.122 END TEST ublk_recovery 00:15:18.122 ************************************ 00:15:18.122 06:49:04 -- spdk/autotest.sh@251 -- # [[ 0 -eq 1 ]] 00:15:18.122 06:49:04 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:15:18.122 06:49:04 -- spdk/autotest.sh@260 -- # timing_exit lib 00:15:18.122 06:49:04 -- common/autotest_common.sh@732 -- # xtrace_disable 00:15:18.122 06:49:04 -- common/autotest_common.sh@10 -- # set +x 00:15:18.122 06:49:04 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:15:18.122 06:49:04 -- spdk/autotest.sh@267 -- # '[' 0 -eq 1 ']' 00:15:18.122 06:49:04 -- spdk/autotest.sh@276 -- # '[' 0 -eq 1 
']' 00:15:18.122 06:49:04 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:15:18.122 06:49:04 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:15:18.122 06:49:04 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:15:18.122 06:49:04 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']' 00:15:18.122 06:49:04 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:15:18.122 06:49:04 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:15:18.122 06:49:04 -- spdk/autotest.sh@342 -- # '[' 1 -eq 1 ']' 00:15:18.122 06:49:04 -- spdk/autotest.sh@343 -- # run_test ftl /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:15:18.122 06:49:04 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:15:18.122 06:49:04 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:18.123 06:49:04 -- common/autotest_common.sh@10 -- # set +x 00:15:18.123 ************************************ 00:15:18.123 START TEST ftl 00:15:18.123 ************************************ 00:15:18.123 06:49:04 ftl -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:15:18.123 * Looking for test storage... 00:15:18.123 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:15:18.123 06:49:04 ftl -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:15:18.123 06:49:04 ftl -- common/autotest_common.sh@1693 -- # lcov --version 00:15:18.123 06:49:04 ftl -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:15:18.123 06:49:04 ftl -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:15:18.123 06:49:04 ftl -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:18.123 06:49:04 ftl -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:18.123 06:49:04 ftl -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:18.123 06:49:04 ftl -- scripts/common.sh@336 -- # IFS=.-: 00:15:18.123 06:49:04 ftl -- scripts/common.sh@336 -- # read -ra ver1 00:15:18.123 06:49:04 ftl -- scripts/common.sh@337 -- # IFS=.-: 00:15:18.123 06:49:04 ftl -- scripts/common.sh@337 -- # read -ra ver2 00:15:18.123 06:49:04 ftl -- scripts/common.sh@338 -- # local 'op=<' 00:15:18.123 06:49:04 ftl -- scripts/common.sh@340 -- # ver1_l=2 00:15:18.123 06:49:04 ftl -- scripts/common.sh@341 -- # ver2_l=1 00:15:18.123 06:49:04 ftl -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:18.123 06:49:04 ftl -- scripts/common.sh@344 -- # case "$op" in 00:15:18.123 06:49:04 ftl -- scripts/common.sh@345 -- # : 1 00:15:18.123 06:49:04 ftl -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:18.123 06:49:04 ftl -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:18.123 06:49:04 ftl -- scripts/common.sh@365 -- # decimal 1 00:15:18.123 06:49:04 ftl -- scripts/common.sh@353 -- # local d=1 00:15:18.123 06:49:04 ftl -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:18.123 06:49:04 ftl -- scripts/common.sh@355 -- # echo 1 00:15:18.123 06:49:04 ftl -- scripts/common.sh@365 -- # ver1[v]=1 00:15:18.123 06:49:04 ftl -- scripts/common.sh@366 -- # decimal 2 00:15:18.123 06:49:05 ftl -- scripts/common.sh@353 -- # local d=2 00:15:18.123 06:49:05 ftl -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:18.123 06:49:05 ftl -- scripts/common.sh@355 -- # echo 2 00:15:18.123 06:49:05 ftl -- scripts/common.sh@366 -- # ver2[v]=2 00:15:18.123 06:49:05 ftl -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:18.123 06:49:05 ftl -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:18.123 06:49:05 ftl -- scripts/common.sh@368 -- # return 0 00:15:18.123 06:49:05 ftl -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:18.123 06:49:05 ftl -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:15:18.123 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:18.123 --rc genhtml_branch_coverage=1 00:15:18.123 --rc genhtml_function_coverage=1 00:15:18.123 --rc genhtml_legend=1 00:15:18.123 --rc geninfo_all_blocks=1 00:15:18.123 --rc geninfo_unexecuted_blocks=1 00:15:18.123 00:15:18.123 ' 00:15:18.123 06:49:05 ftl -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:15:18.123 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:18.123 --rc genhtml_branch_coverage=1 00:15:18.123 --rc genhtml_function_coverage=1 00:15:18.123 --rc genhtml_legend=1 00:15:18.123 --rc geninfo_all_blocks=1 00:15:18.123 --rc geninfo_unexecuted_blocks=1 00:15:18.123 00:15:18.123 ' 00:15:18.123 06:49:05 ftl -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:15:18.123 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:18.123 --rc genhtml_branch_coverage=1 00:15:18.123 --rc genhtml_function_coverage=1 00:15:18.123 --rc genhtml_legend=1 00:15:18.123 --rc geninfo_all_blocks=1 00:15:18.123 --rc geninfo_unexecuted_blocks=1 00:15:18.123 00:15:18.123 ' 00:15:18.123 06:49:05 ftl -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:15:18.123 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:18.123 --rc genhtml_branch_coverage=1 00:15:18.123 --rc genhtml_function_coverage=1 00:15:18.123 --rc genhtml_legend=1 00:15:18.123 --rc geninfo_all_blocks=1 00:15:18.123 --rc geninfo_unexecuted_blocks=1 00:15:18.123 00:15:18.123 ' 00:15:18.123 06:49:05 ftl -- ftl/ftl.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:15:18.123 06:49:05 ftl -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:15:18.123 06:49:05 ftl -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:15:18.123 06:49:05 ftl -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:15:18.123 06:49:05 ftl -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
00:15:18.123 06:49:05 ftl -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:15:18.123 06:49:05 ftl -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:18.123 06:49:05 ftl -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:15:18.123 06:49:05 ftl -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:15:18.123 06:49:05 ftl -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:18.123 06:49:05 ftl -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:18.123 06:49:05 ftl -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:15:18.123 06:49:05 ftl -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:15:18.123 06:49:05 ftl -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:15:18.123 06:49:05 ftl -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:15:18.123 06:49:05 ftl -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:15:18.123 06:49:05 ftl -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:15:18.123 06:49:05 ftl -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:18.123 06:49:05 ftl -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:18.123 06:49:05 ftl -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:15:18.123 06:49:05 ftl -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:15:18.123 06:49:05 ftl -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:15:18.123 06:49:05 ftl -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:15:18.123 06:49:05 ftl -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:15:18.123 06:49:05 ftl -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:15:18.123 06:49:05 ftl -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:15:18.123 06:49:05 ftl -- ftl/common.sh@23 -- # spdk_ini_pid= 00:15:18.123 06:49:05 ftl -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:15:18.123 06:49:05 ftl -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:15:18.123 06:49:05 ftl -- ftl/ftl.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:18.123 06:49:05 ftl -- ftl/ftl.sh@31 -- # trap at_ftl_exit SIGINT SIGTERM EXIT 00:15:18.123 06:49:05 ftl -- ftl/ftl.sh@34 -- # PCI_ALLOWED= 00:15:18.123 06:49:05 ftl -- ftl/ftl.sh@34 -- # PCI_BLOCKED= 00:15:18.123 06:49:05 ftl -- ftl/ftl.sh@34 -- # DRIVER_OVERRIDE= 00:15:18.123 06:49:05 ftl -- ftl/ftl.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:15:18.123 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:15:18.123 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:15:18.123 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:15:18.123 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:15:18.123 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:15:18.123 06:49:05 ftl -- ftl/ftl.sh@37 -- # spdk_tgt_pid=83744 00:15:18.123 06:49:05 ftl -- ftl/ftl.sh@38 -- # waitforlisten 83744 00:15:18.123 06:49:05 ftl -- ftl/ftl.sh@36 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc 00:15:18.123 06:49:05 ftl -- common/autotest_common.sh@835 -- # '[' -z 83744 ']' 00:15:18.123 06:49:05 ftl -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:18.123 06:49:05 ftl -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:18.123 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:18.123 06:49:05 ftl -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:18.123 06:49:05 ftl -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:18.123 06:49:05 ftl -- common/autotest_common.sh@10 -- # set +x 00:15:18.123 [2024-11-18 06:49:05.545532] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 00:15:18.123 [2024-11-18 06:49:05.545627] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83744 ] 00:15:18.123 [2024-11-18 06:49:05.699877] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:18.123 [2024-11-18 06:49:05.721139] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:18.123 06:49:06 ftl -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:18.123 06:49:06 ftl -- common/autotest_common.sh@868 -- # return 0 00:15:18.123 06:49:06 ftl -- ftl/ftl.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_set_options -d 00:15:18.123 06:49:06 ftl -- ftl/ftl.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:15:18.123 06:49:06 ftl -- ftl/ftl.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config -j /dev/fd/62 00:15:18.123 06:49:06 ftl -- ftl/ftl.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:15:18.123 06:49:07 ftl -- ftl/ftl.sh@46 -- # cache_size=1310720 00:15:18.123 06:49:07 ftl -- ftl/ftl.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs 00:15:18.123 06:49:07 ftl -- ftl/ftl.sh@47 -- # jq -r '.[] | select(.md_size==64 and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address' 00:15:18.123 06:49:07 ftl -- ftl/ftl.sh@47 -- # cache_disks=0000:00:10.0 00:15:18.123 06:49:07 ftl -- ftl/ftl.sh@48 -- # for disk in $cache_disks 00:15:18.123 06:49:07 ftl -- ftl/ftl.sh@49 -- # nv_cache=0000:00:10.0 00:15:18.123 06:49:07 ftl -- ftl/ftl.sh@50 -- # break 00:15:18.123 06:49:07 ftl -- ftl/ftl.sh@53 -- # '[' -z 0000:00:10.0 ']' 00:15:18.123 06:49:07 ftl -- ftl/ftl.sh@59 -- # base_size=1310720 00:15:18.123 06:49:07 ftl -- ftl/ftl.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs 00:15:18.124 06:49:07 ftl -- ftl/ftl.sh@60 -- # jq -r '.[] | select(.driver_specific.nvme[0].pci_address!="0000:00:10.0" and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address' 00:15:18.124 06:49:07 ftl -- ftl/ftl.sh@60 -- # base_disks=0000:00:11.0 00:15:18.124 06:49:07 ftl -- ftl/ftl.sh@61 -- # for disk in $base_disks 00:15:18.124 06:49:07 ftl -- ftl/ftl.sh@62 -- # device=0000:00:11.0 00:15:18.124 06:49:07 ftl -- ftl/ftl.sh@63 -- # break 00:15:18.124 06:49:07 ftl -- ftl/ftl.sh@66 -- # killprocess 83744 00:15:18.124 06:49:07 ftl -- common/autotest_common.sh@954 -- # '[' -z 83744 ']' 00:15:18.124 06:49:07 ftl -- common/autotest_common.sh@958 -- # kill -0 83744 00:15:18.124 06:49:07 ftl -- common/autotest_common.sh@959 -- # uname 00:15:18.124 06:49:07 ftl -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:18.124 06:49:07 ftl -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83744 00:15:18.124 06:49:07 ftl -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:18.124 killing process with pid 83744 00:15:18.124 06:49:07 ftl -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:18.124 06:49:07 ftl -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83744' 00:15:18.124 06:49:07 ftl -- common/autotest_common.sh@973 -- # kill 83744 00:15:18.124 06:49:07 ftl -- common/autotest_common.sh@978 -- # wait 83744 00:15:18.124 06:49:07 ftl -- ftl/ftl.sh@68 -- # '[' -z 0000:00:11.0 ']' 00:15:18.124 06:49:07 ftl -- ftl/ftl.sh@73 -- # run_test ftl_fio_basic /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 0000:00:11.0 0000:00:10.0 basic 00:15:18.124 06:49:07 ftl -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:15:18.124 06:49:07 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:18.124 06:49:07 ftl -- common/autotest_common.sh@10 -- # set +x 00:15:18.124 ************************************ 00:15:18.124 START TEST ftl_fio_basic 00:15:18.124 ************************************ 00:15:18.124 06:49:07 ftl.ftl_fio_basic -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 0000:00:11.0 0000:00:10.0 basic 00:15:18.124 * Looking for test storage... 00:15:18.124 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:15:18.124 06:49:08 ftl.ftl_fio_basic -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:15:18.124 06:49:08 ftl.ftl_fio_basic -- common/autotest_common.sh@1693 -- # lcov --version 00:15:18.124 06:49:08 ftl.ftl_fio_basic -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:15:18.124 06:49:08 ftl.ftl_fio_basic -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:15:18.124 06:49:08 ftl.ftl_fio_basic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:18.124 06:49:08 ftl.ftl_fio_basic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:18.124 06:49:08 ftl.ftl_fio_basic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:18.124 06:49:08 ftl.ftl_fio_basic -- scripts/common.sh@336 -- # IFS=.-: 00:15:18.124 06:49:08 ftl.ftl_fio_basic -- scripts/common.sh@336 -- # read -ra ver1 00:15:18.124 06:49:08 ftl.ftl_fio_basic -- scripts/common.sh@337 -- # IFS=.-: 00:15:18.124 06:49:08 ftl.ftl_fio_basic -- scripts/common.sh@337 -- # read -ra ver2 00:15:18.124 06:49:08 ftl.ftl_fio_basic -- scripts/common.sh@338 -- # local 'op=<' 00:15:18.124 06:49:08 ftl.ftl_fio_basic -- scripts/common.sh@340 -- # ver1_l=2 00:15:18.124 06:49:08 ftl.ftl_fio_basic -- scripts/common.sh@341 -- # ver2_l=1 00:15:18.124 06:49:08 ftl.ftl_fio_basic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:18.124 06:49:08 ftl.ftl_fio_basic -- scripts/common.sh@344 -- # case "$op" in 00:15:18.124 06:49:08 ftl.ftl_fio_basic -- scripts/common.sh@345 -- # : 1 00:15:18.124 06:49:08 ftl.ftl_fio_basic -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:18.124 06:49:08 ftl.ftl_fio_basic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:18.124 06:49:08 ftl.ftl_fio_basic -- scripts/common.sh@365 -- # decimal 1 00:15:18.124 06:49:08 ftl.ftl_fio_basic -- scripts/common.sh@353 -- # local d=1 00:15:18.124 06:49:08 ftl.ftl_fio_basic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:18.124 06:49:08 ftl.ftl_fio_basic -- scripts/common.sh@355 -- # echo 1 00:15:18.124 06:49:08 ftl.ftl_fio_basic -- scripts/common.sh@365 -- # ver1[v]=1 00:15:18.124 06:49:08 ftl.ftl_fio_basic -- scripts/common.sh@366 -- # decimal 2 00:15:18.124 06:49:08 ftl.ftl_fio_basic -- scripts/common.sh@353 -- # local d=2 00:15:18.124 06:49:08 ftl.ftl_fio_basic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:18.124 06:49:08 ftl.ftl_fio_basic -- scripts/common.sh@355 -- # echo 2 00:15:18.124 06:49:08 ftl.ftl_fio_basic -- scripts/common.sh@366 -- # ver2[v]=2 00:15:18.124 06:49:08 ftl.ftl_fio_basic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:18.124 06:49:08 ftl.ftl_fio_basic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:18.124 06:49:08 ftl.ftl_fio_basic -- scripts/common.sh@368 -- # return 0 00:15:18.124 06:49:08 ftl.ftl_fio_basic -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:18.124 06:49:08 ftl.ftl_fio_basic -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:15:18.124 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:18.124 --rc genhtml_branch_coverage=1 00:15:18.124 --rc genhtml_function_coverage=1 00:15:18.124 --rc genhtml_legend=1 00:15:18.124 --rc geninfo_all_blocks=1 00:15:18.124 --rc geninfo_unexecuted_blocks=1 00:15:18.124 00:15:18.124 ' 00:15:18.124 06:49:08 ftl.ftl_fio_basic -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:15:18.124 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:18.124 --rc genhtml_branch_coverage=1 00:15:18.124 --rc genhtml_function_coverage=1 00:15:18.124 --rc genhtml_legend=1 00:15:18.124 --rc geninfo_all_blocks=1 00:15:18.124 --rc geninfo_unexecuted_blocks=1 00:15:18.124 00:15:18.124 ' 00:15:18.124 06:49:08 ftl.ftl_fio_basic -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:15:18.124 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:18.124 --rc genhtml_branch_coverage=1 00:15:18.124 --rc genhtml_function_coverage=1 00:15:18.124 --rc genhtml_legend=1 00:15:18.124 --rc geninfo_all_blocks=1 00:15:18.124 --rc geninfo_unexecuted_blocks=1 00:15:18.124 00:15:18.124 ' 00:15:18.124 06:49:08 ftl.ftl_fio_basic -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:15:18.124 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:18.124 --rc genhtml_branch_coverage=1 00:15:18.124 --rc genhtml_function_coverage=1 00:15:18.124 --rc genhtml_legend=1 00:15:18.124 --rc geninfo_all_blocks=1 00:15:18.124 --rc geninfo_unexecuted_blocks=1 00:15:18.124 00:15:18.124 ' 00:15:18.124 06:49:08 ftl.ftl_fio_basic -- ftl/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:15:18.124 06:49:08 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 00:15:18.124 06:49:08 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:15:18.124 06:49:08 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:15:18.124 06:49:08 ftl.ftl_fio_basic -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
00:15:18.124 06:49:08 ftl.ftl_fio_basic -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:15:18.124 06:49:08 ftl.ftl_fio_basic -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:18.124 06:49:08 ftl.ftl_fio_basic -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:15:18.124 06:49:08 ftl.ftl_fio_basic -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:15:18.124 06:49:08 ftl.ftl_fio_basic -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:18.124 06:49:08 ftl.ftl_fio_basic -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:18.124 06:49:08 ftl.ftl_fio_basic -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:15:18.124 06:49:08 ftl.ftl_fio_basic -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:15:18.124 06:49:08 ftl.ftl_fio_basic -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:15:18.124 06:49:08 ftl.ftl_fio_basic -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:15:18.124 06:49:08 ftl.ftl_fio_basic -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:15:18.124 06:49:08 ftl.ftl_fio_basic -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:15:18.124 06:49:08 ftl.ftl_fio_basic -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:18.124 06:49:08 ftl.ftl_fio_basic -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:18.124 06:49:08 ftl.ftl_fio_basic -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:15:18.124 06:49:08 ftl.ftl_fio_basic -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:15:18.124 06:49:08 ftl.ftl_fio_basic -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:15:18.124 06:49:08 ftl.ftl_fio_basic -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:15:18.124 06:49:08 ftl.ftl_fio_basic -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:15:18.124 06:49:08 ftl.ftl_fio_basic -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:15:18.124 06:49:08 ftl.ftl_fio_basic -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:15:18.124 06:49:08 ftl.ftl_fio_basic -- ftl/common.sh@23 -- # spdk_ini_pid= 00:15:18.124 06:49:08 ftl.ftl_fio_basic -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:15:18.124 06:49:08 ftl.ftl_fio_basic -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:15:18.124 06:49:08 ftl.ftl_fio_basic -- ftl/fio.sh@11 -- # declare -A suite 00:15:18.124 06:49:08 ftl.ftl_fio_basic -- ftl/fio.sh@12 -- # suite['basic']='randw-verify randw-verify-j2 randw-verify-depth128' 00:15:18.124 06:49:08 ftl.ftl_fio_basic -- ftl/fio.sh@13 -- # suite['extended']='drive-prep randw-verify-qd128-ext randw-verify-qd2048-ext randw randr randrw unmap' 00:15:18.124 06:49:08 ftl.ftl_fio_basic -- ftl/fio.sh@14 -- # suite['nightly']='drive-prep randw-verify-qd256-nght randw-verify-qd256-nght randw-verify-qd256-nght' 00:15:18.124 06:49:08 ftl.ftl_fio_basic -- ftl/fio.sh@16 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:18.124 06:49:08 ftl.ftl_fio_basic -- ftl/fio.sh@23 -- # device=0000:00:11.0 00:15:18.124 06:49:08 ftl.ftl_fio_basic -- ftl/fio.sh@24 -- # cache_device=0000:00:10.0 00:15:18.124 06:49:08 ftl.ftl_fio_basic -- ftl/fio.sh@25 -- # tests='randw-verify randw-verify-j2 
randw-verify-depth128' 00:15:18.124 06:49:08 ftl.ftl_fio_basic -- ftl/fio.sh@26 -- # uuid= 00:15:18.124 06:49:08 ftl.ftl_fio_basic -- ftl/fio.sh@27 -- # timeout=240 00:15:18.124 06:49:08 ftl.ftl_fio_basic -- ftl/fio.sh@29 -- # [[ y != y ]] 00:15:18.125 06:49:08 ftl.ftl_fio_basic -- ftl/fio.sh@34 -- # '[' -z 'randw-verify randw-verify-j2 randw-verify-depth128' ']' 00:15:18.125 06:49:08 ftl.ftl_fio_basic -- ftl/fio.sh@39 -- # export FTL_BDEV_NAME=ftl0 00:15:18.125 06:49:08 ftl.ftl_fio_basic -- ftl/fio.sh@39 -- # FTL_BDEV_NAME=ftl0 00:15:18.125 06:49:08 ftl.ftl_fio_basic -- ftl/fio.sh@40 -- # export FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:15:18.125 06:49:08 ftl.ftl_fio_basic -- ftl/fio.sh@40 -- # FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:15:18.125 06:49:08 ftl.ftl_fio_basic -- ftl/fio.sh@42 -- # trap 'fio_kill; exit 1' SIGINT SIGTERM EXIT 00:15:18.125 06:49:08 ftl.ftl_fio_basic -- ftl/fio.sh@45 -- # svcpid=83854 00:15:18.125 06:49:08 ftl.ftl_fio_basic -- ftl/fio.sh@46 -- # waitforlisten 83854 00:15:18.125 06:49:08 ftl.ftl_fio_basic -- common/autotest_common.sh@835 -- # '[' -z 83854 ']' 00:15:18.125 06:49:08 ftl.ftl_fio_basic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:18.125 06:49:08 ftl.ftl_fio_basic -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:18.125 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:18.125 06:49:08 ftl.ftl_fio_basic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:18.125 06:49:08 ftl.ftl_fio_basic -- ftl/fio.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 7 00:15:18.125 06:49:08 ftl.ftl_fio_basic -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:18.125 06:49:08 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:15:18.125 [2024-11-18 06:49:08.212610] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 
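Before the target comes up, note the contract the driver script just established: the basic suite is the three job names above, and FTL_BDEV_NAME plus FTL_JSON_CONF are the only handles the fio side receives. A hedged sketch of how one such job is typically launched through SPDK's fio bdev plugin; the plugin path and the job file resolving its device from ${FTL_BDEV_NAME} are assumptions about the harness, not shown in this log:

  # Sketch: run one suite job against ftl0 via the SPDK fio bdev plugin.
  SPDK=/home/vagrant/spdk_repo/spdk
  export FTL_BDEV_NAME=ftl0
  export FTL_JSON_CONF=$SPDK/test/ftl/config/ftl.json
  # Assumed: the job file picks the device with filename=${FTL_BDEV_NAME},
  # which fio expands from the environment when parsing the job.
  LD_PRELOAD=$SPDK/build/fio/spdk_bdev fio \
      --ioengine=spdk_bdev \
      --spdk_json_conf="$FTL_JSON_CONF" \
      "$SPDK/test/ftl/config/fio/randw-verify.fio"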
00:15:18.125 [2024-11-18 06:49:08.212940] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83854 ] 00:15:18.125 [2024-11-18 06:49:08.365700] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:15:18.125 [2024-11-18 06:49:08.387888] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:18.125 [2024-11-18 06:49:08.388164] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:15:18.125 [2024-11-18 06:49:08.388239] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:18.125 06:49:09 ftl.ftl_fio_basic -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:18.125 06:49:09 ftl.ftl_fio_basic -- common/autotest_common.sh@868 -- # return 0 00:15:18.125 06:49:09 ftl.ftl_fio_basic -- ftl/fio.sh@48 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:15:18.125 06:49:09 ftl.ftl_fio_basic -- ftl/common.sh@54 -- # local name=nvme0 00:15:18.125 06:49:09 ftl.ftl_fio_basic -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:15:18.125 06:49:09 ftl.ftl_fio_basic -- ftl/common.sh@56 -- # local size=103424 00:15:18.125 06:49:09 ftl.ftl_fio_basic -- ftl/common.sh@59 -- # local base_bdev 00:15:18.125 06:49:09 ftl.ftl_fio_basic -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:15:18.125 06:49:09 ftl.ftl_fio_basic -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:15:18.125 06:49:09 ftl.ftl_fio_basic -- ftl/common.sh@62 -- # local base_size 00:15:18.125 06:49:09 ftl.ftl_fio_basic -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:15:18.125 06:49:09 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:15:18.125 06:49:09 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local bdev_info 00:15:18.125 06:49:09 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # local bs 00:15:18.125 06:49:09 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # local nb 00:15:18.125 06:49:09 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:15:18.125 06:49:09 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:15:18.125 { 00:15:18.125 "name": "nvme0n1", 00:15:18.125 "aliases": [ 00:15:18.125 "1a4c2beb-72d4-4aac-a9be-319baba1161c" 00:15:18.125 ], 00:15:18.125 "product_name": "NVMe disk", 00:15:18.125 "block_size": 4096, 00:15:18.125 "num_blocks": 1310720, 00:15:18.125 "uuid": "1a4c2beb-72d4-4aac-a9be-319baba1161c", 00:15:18.125 "numa_id": -1, 00:15:18.125 "assigned_rate_limits": { 00:15:18.125 "rw_ios_per_sec": 0, 00:15:18.125 "rw_mbytes_per_sec": 0, 00:15:18.125 "r_mbytes_per_sec": 0, 00:15:18.125 "w_mbytes_per_sec": 0 00:15:18.125 }, 00:15:18.125 "claimed": false, 00:15:18.125 "zoned": false, 00:15:18.125 "supported_io_types": { 00:15:18.125 "read": true, 00:15:18.125 "write": true, 00:15:18.125 "unmap": true, 00:15:18.125 "flush": true, 00:15:18.125 "reset": true, 00:15:18.125 "nvme_admin": true, 00:15:18.125 "nvme_io": true, 00:15:18.125 "nvme_io_md": false, 00:15:18.125 "write_zeroes": true, 00:15:18.125 "zcopy": false, 00:15:18.125 "get_zone_info": false, 00:15:18.125 "zone_management": false, 00:15:18.125 "zone_append": false, 00:15:18.125 "compare": true, 00:15:18.125 "compare_and_write": false, 00:15:18.125 "abort": true, 00:15:18.125 
"seek_hole": false, 00:15:18.125 "seek_data": false, 00:15:18.125 "copy": true, 00:15:18.125 "nvme_iov_md": false 00:15:18.125 }, 00:15:18.125 "driver_specific": { 00:15:18.125 "nvme": [ 00:15:18.125 { 00:15:18.125 "pci_address": "0000:00:11.0", 00:15:18.125 "trid": { 00:15:18.125 "trtype": "PCIe", 00:15:18.125 "traddr": "0000:00:11.0" 00:15:18.125 }, 00:15:18.125 "ctrlr_data": { 00:15:18.125 "cntlid": 0, 00:15:18.125 "vendor_id": "0x1b36", 00:15:18.125 "model_number": "QEMU NVMe Ctrl", 00:15:18.125 "serial_number": "12341", 00:15:18.125 "firmware_revision": "8.0.0", 00:15:18.125 "subnqn": "nqn.2019-08.org.qemu:12341", 00:15:18.125 "oacs": { 00:15:18.125 "security": 0, 00:15:18.125 "format": 1, 00:15:18.125 "firmware": 0, 00:15:18.125 "ns_manage": 1 00:15:18.125 }, 00:15:18.125 "multi_ctrlr": false, 00:15:18.125 "ana_reporting": false 00:15:18.125 }, 00:15:18.125 "vs": { 00:15:18.125 "nvme_version": "1.4" 00:15:18.125 }, 00:15:18.125 "ns_data": { 00:15:18.125 "id": 1, 00:15:18.125 "can_share": false 00:15:18.125 } 00:15:18.125 } 00:15:18.125 ], 00:15:18.125 "mp_policy": "active_passive" 00:15:18.125 } 00:15:18.125 } 00:15:18.125 ]' 00:15:18.125 06:49:09 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:15:18.125 06:49:09 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bs=4096 00:15:18.125 06:49:09 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:15:18.125 06:49:09 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # nb=1310720 00:15:18.125 06:49:09 ftl.ftl_fio_basic -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:15:18.125 06:49:09 ftl.ftl_fio_basic -- common/autotest_common.sh@1392 -- # echo 5120 00:15:18.125 06:49:09 ftl.ftl_fio_basic -- ftl/common.sh@63 -- # base_size=5120 00:15:18.125 06:49:09 ftl.ftl_fio_basic -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:15:18.125 06:49:09 ftl.ftl_fio_basic -- ftl/common.sh@67 -- # clear_lvols 00:15:18.125 06:49:09 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:15:18.125 06:49:09 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:15:18.125 06:49:09 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # stores= 00:15:18.125 06:49:09 ftl.ftl_fio_basic -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:15:18.125 06:49:10 ftl.ftl_fio_basic -- ftl/common.sh@68 -- # lvs=782fa865-e22d-44a2-bc02-1c355c4dea3b 00:15:18.125 06:49:10 ftl.ftl_fio_basic -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 782fa865-e22d-44a2-bc02-1c355c4dea3b 00:15:18.125 06:49:10 ftl.ftl_fio_basic -- ftl/fio.sh@48 -- # split_bdev=9453657c-7567-4914-878a-f906fa3d4f7d 00:15:18.125 06:49:10 ftl.ftl_fio_basic -- ftl/fio.sh@49 -- # create_nv_cache_bdev nvc0 0000:00:10.0 9453657c-7567-4914-878a-f906fa3d4f7d 00:15:18.125 06:49:10 ftl.ftl_fio_basic -- ftl/common.sh@35 -- # local name=nvc0 00:15:18.125 06:49:10 ftl.ftl_fio_basic -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:15:18.125 06:49:10 ftl.ftl_fio_basic -- ftl/common.sh@37 -- # local base_bdev=9453657c-7567-4914-878a-f906fa3d4f7d 00:15:18.125 06:49:10 ftl.ftl_fio_basic -- ftl/common.sh@38 -- # local cache_size= 00:15:18.125 06:49:10 ftl.ftl_fio_basic -- ftl/common.sh@41 -- # get_bdev_size 9453657c-7567-4914-878a-f906fa3d4f7d 00:15:18.125 06:49:10 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local bdev_name=9453657c-7567-4914-878a-f906fa3d4f7d 
00:15:18.125 06:49:10 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local bdev_info 00:15:18.125 06:49:10 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # local bs 00:15:18.125 06:49:10 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # local nb 00:15:18.125 06:49:10 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 9453657c-7567-4914-878a-f906fa3d4f7d 00:15:18.125 06:49:10 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:15:18.125 { 00:15:18.125 "name": "9453657c-7567-4914-878a-f906fa3d4f7d", 00:15:18.125 "aliases": [ 00:15:18.125 "lvs/nvme0n1p0" 00:15:18.125 ], 00:15:18.125 "product_name": "Logical Volume", 00:15:18.125 "block_size": 4096, 00:15:18.125 "num_blocks": 26476544, 00:15:18.125 "uuid": "9453657c-7567-4914-878a-f906fa3d4f7d", 00:15:18.125 "assigned_rate_limits": { 00:15:18.125 "rw_ios_per_sec": 0, 00:15:18.125 "rw_mbytes_per_sec": 0, 00:15:18.125 "r_mbytes_per_sec": 0, 00:15:18.125 "w_mbytes_per_sec": 0 00:15:18.125 }, 00:15:18.125 "claimed": false, 00:15:18.125 "zoned": false, 00:15:18.125 "supported_io_types": { 00:15:18.126 "read": true, 00:15:18.126 "write": true, 00:15:18.126 "unmap": true, 00:15:18.126 "flush": false, 00:15:18.126 "reset": true, 00:15:18.126 "nvme_admin": false, 00:15:18.126 "nvme_io": false, 00:15:18.126 "nvme_io_md": false, 00:15:18.126 "write_zeroes": true, 00:15:18.126 "zcopy": false, 00:15:18.126 "get_zone_info": false, 00:15:18.126 "zone_management": false, 00:15:18.126 "zone_append": false, 00:15:18.126 "compare": false, 00:15:18.126 "compare_and_write": false, 00:15:18.126 "abort": false, 00:15:18.126 "seek_hole": true, 00:15:18.126 "seek_data": true, 00:15:18.126 "copy": false, 00:15:18.126 "nvme_iov_md": false 00:15:18.126 }, 00:15:18.126 "driver_specific": { 00:15:18.126 "lvol": { 00:15:18.126 "lvol_store_uuid": "782fa865-e22d-44a2-bc02-1c355c4dea3b", 00:15:18.126 "base_bdev": "nvme0n1", 00:15:18.126 "thin_provision": true, 00:15:18.126 "num_allocated_clusters": 0, 00:15:18.126 "snapshot": false, 00:15:18.126 "clone": false, 00:15:18.126 "esnap_clone": false 00:15:18.126 } 00:15:18.126 } 00:15:18.126 } 00:15:18.126 ]' 00:15:18.126 06:49:10 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:15:18.126 06:49:10 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bs=4096 00:15:18.126 06:49:10 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:15:18.126 06:49:10 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # nb=26476544 00:15:18.126 06:49:10 ftl.ftl_fio_basic -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:15:18.126 06:49:10 ftl.ftl_fio_basic -- common/autotest_common.sh@1392 -- # echo 103424 00:15:18.126 06:49:10 ftl.ftl_fio_basic -- ftl/common.sh@41 -- # local base_size=5171 00:15:18.126 06:49:10 ftl.ftl_fio_basic -- ftl/common.sh@44 -- # local nvc_bdev 00:15:18.126 06:49:10 ftl.ftl_fio_basic -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:15:18.126 06:49:10 ftl.ftl_fio_basic -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:15:18.126 06:49:10 ftl.ftl_fio_basic -- ftl/common.sh@47 -- # [[ -z '' ]] 00:15:18.126 06:49:10 ftl.ftl_fio_basic -- ftl/common.sh@48 -- # get_bdev_size 9453657c-7567-4914-878a-f906fa3d4f7d 00:15:18.126 06:49:10 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local bdev_name=9453657c-7567-4914-878a-f906fa3d4f7d 00:15:18.126 06:49:10 
ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local bdev_info 00:15:18.126 06:49:10 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # local bs 00:15:18.126 06:49:10 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # local nb 00:15:18.126 06:49:10 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 9453657c-7567-4914-878a-f906fa3d4f7d 00:15:18.126 06:49:10 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:15:18.126 { 00:15:18.126 "name": "9453657c-7567-4914-878a-f906fa3d4f7d", 00:15:18.126 "aliases": [ 00:15:18.126 "lvs/nvme0n1p0" 00:15:18.126 ], 00:15:18.126 "product_name": "Logical Volume", 00:15:18.126 "block_size": 4096, 00:15:18.126 "num_blocks": 26476544, 00:15:18.126 "uuid": "9453657c-7567-4914-878a-f906fa3d4f7d", 00:15:18.126 "assigned_rate_limits": { 00:15:18.126 "rw_ios_per_sec": 0, 00:15:18.126 "rw_mbytes_per_sec": 0, 00:15:18.126 "r_mbytes_per_sec": 0, 00:15:18.126 "w_mbytes_per_sec": 0 00:15:18.126 }, 00:15:18.126 "claimed": false, 00:15:18.126 "zoned": false, 00:15:18.126 "supported_io_types": { 00:15:18.126 "read": true, 00:15:18.126 "write": true, 00:15:18.126 "unmap": true, 00:15:18.126 "flush": false, 00:15:18.126 "reset": true, 00:15:18.126 "nvme_admin": false, 00:15:18.126 "nvme_io": false, 00:15:18.126 "nvme_io_md": false, 00:15:18.126 "write_zeroes": true, 00:15:18.126 "zcopy": false, 00:15:18.126 "get_zone_info": false, 00:15:18.126 "zone_management": false, 00:15:18.126 "zone_append": false, 00:15:18.126 "compare": false, 00:15:18.126 "compare_and_write": false, 00:15:18.126 "abort": false, 00:15:18.126 "seek_hole": true, 00:15:18.126 "seek_data": true, 00:15:18.126 "copy": false, 00:15:18.126 "nvme_iov_md": false 00:15:18.126 }, 00:15:18.126 "driver_specific": { 00:15:18.126 "lvol": { 00:15:18.126 "lvol_store_uuid": "782fa865-e22d-44a2-bc02-1c355c4dea3b", 00:15:18.126 "base_bdev": "nvme0n1", 00:15:18.126 "thin_provision": true, 00:15:18.126 "num_allocated_clusters": 0, 00:15:18.126 "snapshot": false, 00:15:18.126 "clone": false, 00:15:18.126 "esnap_clone": false 00:15:18.126 } 00:15:18.126 } 00:15:18.126 } 00:15:18.126 ]' 00:15:18.126 06:49:10 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:15:18.126 06:49:11 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bs=4096 00:15:18.126 06:49:11 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:15:18.126 06:49:11 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # nb=26476544 00:15:18.126 06:49:11 ftl.ftl_fio_basic -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:15:18.126 06:49:11 ftl.ftl_fio_basic -- common/autotest_common.sh@1392 -- # echo 103424 00:15:18.126 06:49:11 ftl.ftl_fio_basic -- ftl/common.sh@48 -- # cache_size=5171 00:15:18.126 06:49:11 ftl.ftl_fio_basic -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:15:18.385 06:49:11 ftl.ftl_fio_basic -- ftl/fio.sh@49 -- # nv_cache=nvc0n1p0 00:15:18.385 06:49:11 ftl.ftl_fio_basic -- ftl/fio.sh@51 -- # l2p_percentage=60 00:15:18.385 06:49:11 ftl.ftl_fio_basic -- ftl/fio.sh@52 -- # '[' -eq 1 ']' 00:15:18.385 /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh: line 52: [: -eq: unary operator expected 00:15:18.385 06:49:11 ftl.ftl_fio_basic -- ftl/fio.sh@56 -- # get_bdev_size 9453657c-7567-4914-878a-f906fa3d4f7d 00:15:18.385 06:49:11 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local 
bdev_name=9453657c-7567-4914-878a-f906fa3d4f7d 00:15:18.385 06:49:11 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local bdev_info 00:15:18.385 06:49:11 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # local bs 00:15:18.385 06:49:11 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # local nb 00:15:18.385 06:49:11 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 9453657c-7567-4914-878a-f906fa3d4f7d 00:15:18.385 06:49:11 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:15:18.385 { 00:15:18.385 "name": "9453657c-7567-4914-878a-f906fa3d4f7d", 00:15:18.385 "aliases": [ 00:15:18.385 "lvs/nvme0n1p0" 00:15:18.385 ], 00:15:18.385 "product_name": "Logical Volume", 00:15:18.385 "block_size": 4096, 00:15:18.385 "num_blocks": 26476544, 00:15:18.385 "uuid": "9453657c-7567-4914-878a-f906fa3d4f7d", 00:15:18.385 "assigned_rate_limits": { 00:15:18.385 "rw_ios_per_sec": 0, 00:15:18.385 "rw_mbytes_per_sec": 0, 00:15:18.385 "r_mbytes_per_sec": 0, 00:15:18.385 "w_mbytes_per_sec": 0 00:15:18.385 }, 00:15:18.385 "claimed": false, 00:15:18.385 "zoned": false, 00:15:18.385 "supported_io_types": { 00:15:18.385 "read": true, 00:15:18.385 "write": true, 00:15:18.385 "unmap": true, 00:15:18.385 "flush": false, 00:15:18.385 "reset": true, 00:15:18.385 "nvme_admin": false, 00:15:18.385 "nvme_io": false, 00:15:18.385 "nvme_io_md": false, 00:15:18.385 "write_zeroes": true, 00:15:18.385 "zcopy": false, 00:15:18.385 "get_zone_info": false, 00:15:18.385 "zone_management": false, 00:15:18.385 "zone_append": false, 00:15:18.385 "compare": false, 00:15:18.385 "compare_and_write": false, 00:15:18.385 "abort": false, 00:15:18.385 "seek_hole": true, 00:15:18.385 "seek_data": true, 00:15:18.385 "copy": false, 00:15:18.385 "nvme_iov_md": false 00:15:18.385 }, 00:15:18.385 "driver_specific": { 00:15:18.385 "lvol": { 00:15:18.385 "lvol_store_uuid": "782fa865-e22d-44a2-bc02-1c355c4dea3b", 00:15:18.385 "base_bdev": "nvme0n1", 00:15:18.385 "thin_provision": true, 00:15:18.385 "num_allocated_clusters": 0, 00:15:18.385 "snapshot": false, 00:15:18.385 "clone": false, 00:15:18.385 "esnap_clone": false 00:15:18.385 } 00:15:18.385 } 00:15:18.385 } 00:15:18.385 ]' 00:15:18.385 06:49:11 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:15:18.645 06:49:11 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bs=4096 00:15:18.645 06:49:11 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:15:18.645 06:49:11 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # nb=26476544 00:15:18.645 06:49:11 ftl.ftl_fio_basic -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:15:18.645 06:49:11 ftl.ftl_fio_basic -- common/autotest_common.sh@1392 -- # echo 103424 00:15:18.645 06:49:11 ftl.ftl_fio_basic -- ftl/fio.sh@56 -- # l2p_dram_size_mb=60 00:15:18.645 06:49:11 ftl.ftl_fio_basic -- ftl/fio.sh@58 -- # '[' -z '' ']' 00:15:18.645 06:49:11 ftl.ftl_fio_basic -- ftl/fio.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 9453657c-7567-4914-878a-f906fa3d4f7d -c nvc0n1p0 --l2p_dram_limit 60 00:15:18.645 [2024-11-18 06:49:11.691440] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:18.645 [2024-11-18 06:49:11.691557] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:15:18.645 [2024-11-18 06:49:11.691573] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:15:18.645 
[2024-11-18 06:49:11.691581] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:18.645 [2024-11-18 06:49:11.691649] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:18.645 [2024-11-18 06:49:11.691659] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:15:18.645 [2024-11-18 06:49:11.691665] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.044 ms 00:15:18.645 [2024-11-18 06:49:11.691674] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:18.645 [2024-11-18 06:49:11.691708] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:15:18.645 [2024-11-18 06:49:11.691930] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:15:18.645 [2024-11-18 06:49:11.691942] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:18.645 [2024-11-18 06:49:11.691951] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:15:18.645 [2024-11-18 06:49:11.691968] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.246 ms 00:15:18.645 [2024-11-18 06:49:11.691991] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:18.645 [2024-11-18 06:49:11.692087] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID d29e7f86-a2ab-48e2-9481-dfe891c12b2b 00:15:18.645 [2024-11-18 06:49:11.693152] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:18.645 [2024-11-18 06:49:11.693258] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:15:18.645 [2024-11-18 06:49:11.693274] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.022 ms 00:15:18.645 [2024-11-18 06:49:11.693280] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:18.645 [2024-11-18 06:49:11.698588] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:18.645 [2024-11-18 06:49:11.698614] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:15:18.645 [2024-11-18 06:49:11.698623] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.236 ms 00:15:18.645 [2024-11-18 06:49:11.698630] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:18.645 [2024-11-18 06:49:11.698720] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:18.645 [2024-11-18 06:49:11.698727] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:15:18.645 [2024-11-18 06:49:11.698736] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.057 ms 00:15:18.645 [2024-11-18 06:49:11.698742] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:18.645 [2024-11-18 06:49:11.698796] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:18.645 [2024-11-18 06:49:11.698804] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:15:18.645 [2024-11-18 06:49:11.698820] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:15:18.645 [2024-11-18 06:49:11.698826] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:18.645 [2024-11-18 06:49:11.698853] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:15:18.645 [2024-11-18 06:49:11.700166] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:18.645 [2024-11-18 
06:49:11.700263] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:15:18.645 [2024-11-18 06:49:11.700290] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.319 ms 00:15:18.645 [2024-11-18 06:49:11.700298] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:18.645 [2024-11-18 06:49:11.700336] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:18.645 [2024-11-18 06:49:11.700344] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:15:18.645 [2024-11-18 06:49:11.700358] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:15:18.645 [2024-11-18 06:49:11.700367] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:18.645 [2024-11-18 06:49:11.700394] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:15:18.646 [2024-11-18 06:49:11.700513] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:15:18.646 [2024-11-18 06:49:11.700522] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:15:18.646 [2024-11-18 06:49:11.700532] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:15:18.646 [2024-11-18 06:49:11.700540] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:15:18.646 [2024-11-18 06:49:11.700551] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:15:18.646 [2024-11-18 06:49:11.700557] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:15:18.646 [2024-11-18 06:49:11.700564] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:15:18.646 [2024-11-18 06:49:11.700570] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:15:18.646 [2024-11-18 06:49:11.700577] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:15:18.646 [2024-11-18 06:49:11.700583] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:18.646 [2024-11-18 06:49:11.700590] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:15:18.646 [2024-11-18 06:49:11.700599] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.190 ms 00:15:18.646 [2024-11-18 06:49:11.700606] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:18.646 [2024-11-18 06:49:11.700677] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:18.646 [2024-11-18 06:49:11.700686] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:15:18.646 [2024-11-18 06:49:11.700694] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:15:18.646 [2024-11-18 06:49:11.700701] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:18.646 [2024-11-18 06:49:11.700804] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:15:18.646 [2024-11-18 06:49:11.700813] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:15:18.646 [2024-11-18 06:49:11.700819] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:15:18.646 [2024-11-18 06:49:11.700826] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:15:18.646 [2024-11-18 06:49:11.700832] ftl_layout.c: 130:dump_region: *NOTICE*: 
[FTL][ftl0] Region l2p 00:15:18.646 [2024-11-18 06:49:11.700838] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:15:18.646 [2024-11-18 06:49:11.700843] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:15:18.646 [2024-11-18 06:49:11.700850] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:15:18.646 [2024-11-18 06:49:11.700857] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:15:18.646 [2024-11-18 06:49:11.700866] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:15:18.646 [2024-11-18 06:49:11.700872] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:15:18.646 [2024-11-18 06:49:11.700879] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:15:18.646 [2024-11-18 06:49:11.700885] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:15:18.646 [2024-11-18 06:49:11.700894] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:15:18.646 [2024-11-18 06:49:11.700900] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:15:18.646 [2024-11-18 06:49:11.700907] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:15:18.646 [2024-11-18 06:49:11.700923] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:15:18.646 [2024-11-18 06:49:11.700930] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:15:18.646 [2024-11-18 06:49:11.700936] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:15:18.646 [2024-11-18 06:49:11.700943] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:15:18.646 [2024-11-18 06:49:11.700949] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:15:18.646 [2024-11-18 06:49:11.700956] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:15:18.646 [2024-11-18 06:49:11.700962] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:15:18.646 [2024-11-18 06:49:11.700969] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:15:18.646 [2024-11-18 06:49:11.700990] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:15:18.646 [2024-11-18 06:49:11.700998] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:15:18.646 [2024-11-18 06:49:11.701005] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:15:18.646 [2024-11-18 06:49:11.701013] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:15:18.646 [2024-11-18 06:49:11.701019] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:15:18.646 [2024-11-18 06:49:11.701028] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:15:18.646 [2024-11-18 06:49:11.701034] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:15:18.646 [2024-11-18 06:49:11.701041] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:15:18.646 [2024-11-18 06:49:11.701047] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:15:18.646 [2024-11-18 06:49:11.701055] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:15:18.646 [2024-11-18 06:49:11.701061] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:15:18.646 [2024-11-18 06:49:11.701068] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:15:18.646 [2024-11-18 06:49:11.701074] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:15:18.646 [2024-11-18 06:49:11.701081] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:15:18.646 [2024-11-18 06:49:11.701087] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:15:18.646 [2024-11-18 06:49:11.701095] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:15:18.646 [2024-11-18 06:49:11.701101] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:15:18.646 [2024-11-18 06:49:11.701108] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:15:18.646 [2024-11-18 06:49:11.701113] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:15:18.646 [2024-11-18 06:49:11.701120] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:15:18.646 [2024-11-18 06:49:11.701127] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:15:18.646 [2024-11-18 06:49:11.701143] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:15:18.646 [2024-11-18 06:49:11.701151] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:15:18.646 [2024-11-18 06:49:11.701159] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:15:18.646 [2024-11-18 06:49:11.701164] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:15:18.646 [2024-11-18 06:49:11.701172] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:15:18.646 [2024-11-18 06:49:11.701178] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:15:18.646 [2024-11-18 06:49:11.701185] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:15:18.646 [2024-11-18 06:49:11.701191] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:15:18.646 [2024-11-18 06:49:11.701201] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:15:18.646 [2024-11-18 06:49:11.701209] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:15:18.646 [2024-11-18 06:49:11.701217] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:15:18.646 [2024-11-18 06:49:11.701224] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:15:18.646 [2024-11-18 06:49:11.701232] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:15:18.646 [2024-11-18 06:49:11.701239] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:15:18.646 [2024-11-18 06:49:11.701247] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:15:18.646 [2024-11-18 06:49:11.701253] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:15:18.646 [2024-11-18 06:49:11.701261] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:15:18.646 [2024-11-18 06:49:11.701267] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 
blk_offs:0x7120 blk_sz:0x40 00:15:18.646 [2024-11-18 06:49:11.701276] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:15:18.646 [2024-11-18 06:49:11.701282] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:15:18.646 [2024-11-18 06:49:11.701290] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:15:18.646 [2024-11-18 06:49:11.701296] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:15:18.646 [2024-11-18 06:49:11.701303] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:15:18.646 [2024-11-18 06:49:11.701309] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:15:18.646 [2024-11-18 06:49:11.701317] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:15:18.646 [2024-11-18 06:49:11.701323] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:15:18.646 [2024-11-18 06:49:11.701331] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:15:18.646 [2024-11-18 06:49:11.701337] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:15:18.646 [2024-11-18 06:49:11.701343] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:15:18.647 [2024-11-18 06:49:11.701349] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:15:18.647 [2024-11-18 06:49:11.701356] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:18.647 [2024-11-18 06:49:11.701362] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:15:18.647 [2024-11-18 06:49:11.701370] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.602 ms 00:15:18.647 [2024-11-18 06:49:11.701376] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:18.647 [2024-11-18 06:49:11.701436] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 
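For reference, the hex blk_offs/blk_sz fields in the superblock metadata dump above are block counts, and they line up with the MiB figures in the human-readable layout dump once multiplied by the FTL block size. A minimal sketch of that conversion, assuming the 4 KiB block size that bdev_get_bdevs reports for ftl0 further below; the helper name is illustrative, not an SPDK API:

    FTL_BLOCK_SIZE = 4096  # bytes, per '"block_size": 4096' reported for ftl0 below

    def blocks_to_mib(hex_blocks: str) -> float:
        # blk_offs/blk_sz are hex block counts; scale by block size to get MiB
        return int(hex_blocks, 16) * FTL_BLOCK_SIZE / (1 << 20)

    # Region type:0x2 blk_offs:0x20 blk_sz:0x5000 -> offset 0.125 MiB, size 80.0 MiB,
    # which matches "Region l2p ... offset: 0.12 MiB ... blocks: 80.00 MiB" above
    # (suggesting type 0x2 is the L2P region).
    assert blocks_to_mib("0x5000") == 80.0
    assert blocks_to_mib("0x20") == 0.125

The 80 MiB L2P region is also consistent with the 20,971,520 user blocks reported for the bdev below, at 4 bytes per L2P entry: 20971520 * 4 bytes = 80 MiB.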
00:15:18.647 [2024-11-18 06:49:11.701449] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:15:21.178 [2024-11-18 06:49:13.748535] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:21.178 [2024-11-18 06:49:13.748586] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:15:21.178 [2024-11-18 06:49:13.748602] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2047.080 ms 00:15:21.178 [2024-11-18 06:49:13.748611] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:21.178 [2024-11-18 06:49:13.757255] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:21.178 [2024-11-18 06:49:13.757416] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:15:21.178 [2024-11-18 06:49:13.757440] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.558 ms 00:15:21.178 [2024-11-18 06:49:13.757449] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:21.178 [2024-11-18 06:49:13.757547] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:21.178 [2024-11-18 06:49:13.757556] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:15:21.178 [2024-11-18 06:49:13.757567] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.060 ms 00:15:21.178 [2024-11-18 06:49:13.757586] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:21.178 [2024-11-18 06:49:13.775633] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:21.178 [2024-11-18 06:49:13.775677] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:15:21.178 [2024-11-18 06:49:13.775705] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.989 ms 00:15:21.178 [2024-11-18 06:49:13.775713] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:21.178 [2024-11-18 06:49:13.775761] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:21.178 [2024-11-18 06:49:13.775770] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:15:21.178 [2024-11-18 06:49:13.775780] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:15:21.178 [2024-11-18 06:49:13.775788] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:21.178 [2024-11-18 06:49:13.776180] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:21.178 [2024-11-18 06:49:13.776206] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:15:21.178 [2024-11-18 06:49:13.776217] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.328 ms 00:15:21.178 [2024-11-18 06:49:13.776227] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:21.178 [2024-11-18 06:49:13.776360] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:21.178 [2024-11-18 06:49:13.776378] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:15:21.178 [2024-11-18 06:49:13.776389] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.097 ms 00:15:21.178 [2024-11-18 06:49:13.776398] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:21.178 [2024-11-18 06:49:13.782756] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:21.178 [2024-11-18 06:49:13.782952] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:15:21.178 [2024-11-18 
06:49:13.783003] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.325 ms 00:15:21.178 [2024-11-18 06:49:13.783016] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:21.178 [2024-11-18 06:49:13.793550] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:15:21.178 [2024-11-18 06:49:13.808059] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:21.178 [2024-11-18 06:49:13.808093] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:15:21.178 [2024-11-18 06:49:13.808114] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.944 ms 00:15:21.178 [2024-11-18 06:49:13.808123] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:21.178 [2024-11-18 06:49:13.843616] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:21.178 [2024-11-18 06:49:13.843755] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:15:21.178 [2024-11-18 06:49:13.843782] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.461 ms 00:15:21.178 [2024-11-18 06:49:13.843794] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:21.178 [2024-11-18 06:49:13.843991] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:21.178 [2024-11-18 06:49:13.844011] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:15:21.178 [2024-11-18 06:49:13.844020] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.138 ms 00:15:21.178 [2024-11-18 06:49:13.844039] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:21.178 [2024-11-18 06:49:13.846723] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:21.178 [2024-11-18 06:49:13.846762] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:15:21.178 [2024-11-18 06:49:13.846773] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.653 ms 00:15:21.178 [2024-11-18 06:49:13.846784] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:21.178 [2024-11-18 06:49:13.849151] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:21.178 [2024-11-18 06:49:13.849264] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:15:21.178 [2024-11-18 06:49:13.849346] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.325 ms 00:15:21.178 [2024-11-18 06:49:13.849457] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:21.178 [2024-11-18 06:49:13.850029] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:21.178 [2024-11-18 06:49:13.850064] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:15:21.178 [2024-11-18 06:49:13.850075] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.305 ms 00:15:21.178 [2024-11-18 06:49:13.850087] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:21.178 [2024-11-18 06:49:13.871569] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:21.178 [2024-11-18 06:49:13.871614] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:15:21.178 [2024-11-18 06:49:13.871635] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.427 ms 00:15:21.178 [2024-11-18 06:49:13.871645] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:21.178 [2024-11-18 06:49:13.875333] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:21.178 [2024-11-18 06:49:13.875373] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:15:21.178 [2024-11-18 06:49:13.875394] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.617 ms 00:15:21.178 [2024-11-18 06:49:13.875405] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:21.178 [2024-11-18 06:49:13.878227] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:21.178 [2024-11-18 06:49:13.878379] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:15:21.178 [2024-11-18 06:49:13.878394] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.777 ms 00:15:21.178 [2024-11-18 06:49:13.878403] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:21.178 [2024-11-18 06:49:13.881302] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:21.178 [2024-11-18 06:49:13.881338] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:15:21.178 [2024-11-18 06:49:13.881348] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.859 ms 00:15:21.178 [2024-11-18 06:49:13.881360] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:21.178 [2024-11-18 06:49:13.881405] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:21.178 [2024-11-18 06:49:13.881417] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:15:21.178 [2024-11-18 06:49:13.881427] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:15:21.178 [2024-11-18 06:49:13.881437] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:21.178 [2024-11-18 06:49:13.881525] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:21.178 [2024-11-18 06:49:13.881538] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:15:21.178 [2024-11-18 06:49:13.881549] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:15:21.178 [2024-11-18 06:49:13.881559] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:21.178 [2024-11-18 06:49:13.882488] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 2190.653 ms, result 0 00:15:21.178 { 00:15:21.178 "name": "ftl0", 00:15:21.178 "uuid": "d29e7f86-a2ab-48e2-9481-dfe891c12b2b" 00:15:21.178 } 00:15:21.178 06:49:13 ftl.ftl_fio_basic -- ftl/fio.sh@65 -- # waitforbdev ftl0 00:15:21.178 06:49:13 ftl.ftl_fio_basic -- common/autotest_common.sh@903 -- # local bdev_name=ftl0 00:15:21.178 06:49:13 ftl.ftl_fio_basic -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:21.178 06:49:13 ftl.ftl_fio_basic -- common/autotest_common.sh@905 -- # local i 00:15:21.178 06:49:13 ftl.ftl_fio_basic -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:21.178 06:49:13 ftl.ftl_fio_basic -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:21.178 06:49:13 ftl.ftl_fio_basic -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:15:21.178 06:49:14 ftl.ftl_fio_basic -- common/autotest_common.sh@910 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 -t 2000 00:15:21.437 [ 00:15:21.437 { 00:15:21.437 "name": "ftl0", 00:15:21.437 "aliases": [ 00:15:21.437 "d29e7f86-a2ab-48e2-9481-dfe891c12b2b" 00:15:21.437 ], 00:15:21.437 "product_name": "FTL disk", 00:15:21.437 
"block_size": 4096, 00:15:21.437 "num_blocks": 20971520, 00:15:21.437 "uuid": "d29e7f86-a2ab-48e2-9481-dfe891c12b2b", 00:15:21.437 "assigned_rate_limits": { 00:15:21.437 "rw_ios_per_sec": 0, 00:15:21.437 "rw_mbytes_per_sec": 0, 00:15:21.437 "r_mbytes_per_sec": 0, 00:15:21.437 "w_mbytes_per_sec": 0 00:15:21.437 }, 00:15:21.437 "claimed": false, 00:15:21.437 "zoned": false, 00:15:21.437 "supported_io_types": { 00:15:21.437 "read": true, 00:15:21.437 "write": true, 00:15:21.437 "unmap": true, 00:15:21.437 "flush": true, 00:15:21.437 "reset": false, 00:15:21.437 "nvme_admin": false, 00:15:21.437 "nvme_io": false, 00:15:21.437 "nvme_io_md": false, 00:15:21.437 "write_zeroes": true, 00:15:21.437 "zcopy": false, 00:15:21.437 "get_zone_info": false, 00:15:21.437 "zone_management": false, 00:15:21.437 "zone_append": false, 00:15:21.437 "compare": false, 00:15:21.437 "compare_and_write": false, 00:15:21.437 "abort": false, 00:15:21.437 "seek_hole": false, 00:15:21.437 "seek_data": false, 00:15:21.437 "copy": false, 00:15:21.437 "nvme_iov_md": false 00:15:21.437 }, 00:15:21.437 "driver_specific": { 00:15:21.437 "ftl": { 00:15:21.437 "base_bdev": "9453657c-7567-4914-878a-f906fa3d4f7d", 00:15:21.437 "cache": "nvc0n1p0" 00:15:21.437 } 00:15:21.437 } 00:15:21.437 } 00:15:21.437 ] 00:15:21.437 06:49:14 ftl.ftl_fio_basic -- common/autotest_common.sh@911 -- # return 0 00:15:21.437 06:49:14 ftl.ftl_fio_basic -- ftl/fio.sh@68 -- # echo '{"subsystems": [' 00:15:21.437 06:49:14 ftl.ftl_fio_basic -- ftl/fio.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:15:21.437 06:49:14 ftl.ftl_fio_basic -- ftl/fio.sh@70 -- # echo ']}' 00:15:21.437 06:49:14 ftl.ftl_fio_basic -- ftl/fio.sh@73 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:15:21.697 [2024-11-18 06:49:14.691679] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:21.697 [2024-11-18 06:49:14.691722] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:15:21.697 [2024-11-18 06:49:14.691736] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:15:21.697 [2024-11-18 06:49:14.691744] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:21.697 [2024-11-18 06:49:14.691780] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:15:21.697 [2024-11-18 06:49:14.692273] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:21.697 [2024-11-18 06:49:14.692295] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:15:21.697 [2024-11-18 06:49:14.692309] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.479 ms 00:15:21.697 [2024-11-18 06:49:14.692318] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:21.697 [2024-11-18 06:49:14.692791] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:21.697 [2024-11-18 06:49:14.692827] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:15:21.697 [2024-11-18 06:49:14.692837] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.431 ms 00:15:21.697 [2024-11-18 06:49:14.692847] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:21.697 [2024-11-18 06:49:14.696102] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:21.697 [2024-11-18 06:49:14.696131] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:15:21.697 [2024-11-18 
06:49:14.696149] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.224 ms 00:15:21.697 [2024-11-18 06:49:14.696158] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:21.697 [2024-11-18 06:49:14.702329] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:21.697 [2024-11-18 06:49:14.702360] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:15:21.697 [2024-11-18 06:49:14.702369] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.143 ms 00:15:21.697 [2024-11-18 06:49:14.702378] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:21.697 [2024-11-18 06:49:14.703866] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:21.697 [2024-11-18 06:49:14.704007] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:15:21.697 [2024-11-18 06:49:14.704022] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.409 ms 00:15:21.697 [2024-11-18 06:49:14.704033] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:21.697 [2024-11-18 06:49:14.708847] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:21.697 [2024-11-18 06:49:14.708899] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:15:21.697 [2024-11-18 06:49:14.708915] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.767 ms 00:15:21.697 [2024-11-18 06:49:14.708925] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:21.697 [2024-11-18 06:49:14.709111] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:21.697 [2024-11-18 06:49:14.709124] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:15:21.697 [2024-11-18 06:49:14.709133] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.140 ms 00:15:21.697 [2024-11-18 06:49:14.709142] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:21.697 [2024-11-18 06:49:14.710501] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:21.697 [2024-11-18 06:49:14.710536] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:15:21.697 [2024-11-18 06:49:14.710545] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.332 ms 00:15:21.697 [2024-11-18 06:49:14.710553] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:21.697 [2024-11-18 06:49:14.711675] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:21.697 [2024-11-18 06:49:14.711801] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:15:21.697 [2024-11-18 06:49:14.711815] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.084 ms 00:15:21.697 [2024-11-18 06:49:14.711824] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:21.697 [2024-11-18 06:49:14.712710] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:21.697 [2024-11-18 06:49:14.712742] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:15:21.697 [2024-11-18 06:49:14.712751] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.850 ms 00:15:21.697 [2024-11-18 06:49:14.712759] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:21.697 [2024-11-18 06:49:14.713749] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:21.697 [2024-11-18 06:49:14.713782] mngt/ftl_mngt.c: 428:trace_step: 
*NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:15:21.697 [2024-11-18 06:49:14.713791] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.905 ms 00:15:21.697 [2024-11-18 06:49:14.713800] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:21.697 [2024-11-18 06:49:14.713836] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:15:21.697 [2024-11-18 06:49:14.713854] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:15:21.697 [2024-11-18 06:49:14.713863] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:15:21.697 [2024-11-18 06:49:14.713873] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:15:21.698 [2024-11-18 06:49:14.713880] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:15:21.698 [2024-11-18 06:49:14.713891] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:15:21.698 [2024-11-18 06:49:14.713898] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:15:21.698 [2024-11-18 06:49:14.713907] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:15:21.698 [2024-11-18 06:49:14.713915] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:15:21.698 [2024-11-18 06:49:14.713924] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:15:21.698 [2024-11-18 06:49:14.713932] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:15:21.698 [2024-11-18 06:49:14.713941] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:15:21.698 [2024-11-18 06:49:14.713948] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:15:21.698 [2024-11-18 06:49:14.713957] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:15:21.698 [2024-11-18 06:49:14.713964] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:15:21.698 [2024-11-18 06:49:14.714002] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:15:21.698 [2024-11-18 06:49:14.714010] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:15:21.698 [2024-11-18 06:49:14.714019] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:15:21.698 [2024-11-18 06:49:14.714027] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:15:21.698 [2024-11-18 06:49:14.714036] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:15:21.698 [2024-11-18 06:49:14.714044] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:15:21.698 [2024-11-18 06:49:14.714055] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:15:21.698 [2024-11-18 06:49:14.714063] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:15:21.698 [2024-11-18 
06:49:14.714072] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:15:21.698 [2024-11-18 06:49:14.714079] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:15:21.698 [2024-11-18 06:49:14.714088] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:15:21.698 [2024-11-18 06:49:14.714096] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:15:21.698 [2024-11-18 06:49:14.714107] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:15:21.698 [2024-11-18 06:49:14.714114] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:15:21.698 [2024-11-18 06:49:14.714123] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:15:21.698 [2024-11-18 06:49:14.714131] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:15:21.698 [2024-11-18 06:49:14.714140] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:15:21.698 [2024-11-18 06:49:14.714151] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:15:21.698 [2024-11-18 06:49:14.714161] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:15:21.698 [2024-11-18 06:49:14.714169] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:15:21.698 [2024-11-18 06:49:14.714178] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:15:21.698 [2024-11-18 06:49:14.714186] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:15:21.698 [2024-11-18 06:49:14.714196] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:15:21.698 [2024-11-18 06:49:14.714204] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:15:21.698 [2024-11-18 06:49:14.714213] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:15:21.698 [2024-11-18 06:49:14.714220] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:15:21.698 [2024-11-18 06:49:14.714229] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:15:21.698 [2024-11-18 06:49:14.714237] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:15:21.698 [2024-11-18 06:49:14.714246] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:15:21.698 [2024-11-18 06:49:14.714253] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:15:21.698 [2024-11-18 06:49:14.714262] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:15:21.698 [2024-11-18 06:49:14.714269] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:15:21.698 [2024-11-18 06:49:14.714279] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 
00:15:21.698 [2024-11-18 06:49:14.714286] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:15:21.698 [2024-11-18 06:49:14.714295] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:15:21.698 [2024-11-18 06:49:14.714302] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:15:21.698 [2024-11-18 06:49:14.714311] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:15:21.698 [2024-11-18 06:49:14.714318] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:15:21.698 [2024-11-18 06:49:14.714330] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:15:21.698 [2024-11-18 06:49:14.714338] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:15:21.698 [2024-11-18 06:49:14.714348] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:15:21.698 [2024-11-18 06:49:14.714355] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:15:21.698 [2024-11-18 06:49:14.714364] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:15:21.698 [2024-11-18 06:49:14.714371] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:15:21.698 [2024-11-18 06:49:14.714380] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:15:21.698 [2024-11-18 06:49:14.714387] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:15:21.698 [2024-11-18 06:49:14.714396] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:15:21.698 [2024-11-18 06:49:14.714404] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:15:21.698 [2024-11-18 06:49:14.714412] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:15:21.698 [2024-11-18 06:49:14.714421] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:15:21.698 [2024-11-18 06:49:14.714431] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:15:21.698 [2024-11-18 06:49:14.714439] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:15:21.698 [2024-11-18 06:49:14.714448] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:15:21.698 [2024-11-18 06:49:14.714455] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:15:21.698 [2024-11-18 06:49:14.714466] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:15:21.698 [2024-11-18 06:49:14.714474] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:15:21.698 [2024-11-18 06:49:14.714483] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:15:21.698 [2024-11-18 06:49:14.714490] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 
wr_cnt: 0 state: free 00:15:21.698 [2024-11-18 06:49:14.714499] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:15:21.698 [2024-11-18 06:49:14.714507] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:15:21.698 [2024-11-18 06:49:14.714516] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:15:21.698 [2024-11-18 06:49:14.714523] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:15:21.699 [2024-11-18 06:49:14.714532] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:15:21.699 [2024-11-18 06:49:14.714540] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:15:21.699 [2024-11-18 06:49:14.714549] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:15:21.699 [2024-11-18 06:49:14.714556] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:15:21.699 [2024-11-18 06:49:14.714566] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:15:21.699 [2024-11-18 06:49:14.714573] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:15:21.699 [2024-11-18 06:49:14.714582] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:15:21.699 [2024-11-18 06:49:14.714590] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:15:21.699 [2024-11-18 06:49:14.714600] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:15:21.699 [2024-11-18 06:49:14.714608] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:15:21.699 [2024-11-18 06:49:14.714629] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:15:21.699 [2024-11-18 06:49:14.714637] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:15:21.699 [2024-11-18 06:49:14.714646] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:15:21.699 [2024-11-18 06:49:14.714653] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:15:21.699 [2024-11-18 06:49:14.714662] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:15:21.699 [2024-11-18 06:49:14.714669] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:15:21.699 [2024-11-18 06:49:14.714678] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:15:21.699 [2024-11-18 06:49:14.714685] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:15:21.699 [2024-11-18 06:49:14.714694] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:15:21.699 [2024-11-18 06:49:14.714703] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:15:21.699 [2024-11-18 06:49:14.714713] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 97: 0 / 261120 wr_cnt: 0 state: free 00:15:21.699 [2024-11-18 06:49:14.714720] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:15:21.699 [2024-11-18 06:49:14.714730] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:15:21.699 [2024-11-18 06:49:14.714738] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:15:21.699 [2024-11-18 06:49:14.714756] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:15:21.699 [2024-11-18 06:49:14.714764] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: d29e7f86-a2ab-48e2-9481-dfe891c12b2b 00:15:21.699 [2024-11-18 06:49:14.714786] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:15:21.699 [2024-11-18 06:49:14.714793] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:15:21.699 [2024-11-18 06:49:14.714802] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:15:21.699 [2024-11-18 06:49:14.714809] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:15:21.699 [2024-11-18 06:49:14.714817] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:15:21.699 [2024-11-18 06:49:14.714825] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:15:21.699 [2024-11-18 06:49:14.714834] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:15:21.699 [2024-11-18 06:49:14.714841] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:15:21.699 [2024-11-18 06:49:14.714849] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:15:21.699 [2024-11-18 06:49:14.714856] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:21.699 [2024-11-18 06:49:14.714874] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:15:21.699 [2024-11-18 06:49:14.714882] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.021 ms 00:15:21.699 [2024-11-18 06:49:14.714890] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:21.699 [2024-11-18 06:49:14.716752] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:21.699 [2024-11-18 06:49:14.716850] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:15:21.699 [2024-11-18 06:49:14.716907] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.833 ms 00:15:21.699 [2024-11-18 06:49:14.716931] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:21.699 [2024-11-18 06:49:14.717061] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:21.699 [2024-11-18 06:49:14.717166] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:15:21.699 [2024-11-18 06:49:14.717191] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.057 ms 00:15:21.699 [2024-11-18 06:49:14.717226] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:21.699 [2024-11-18 06:49:14.722696] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:21.699 [2024-11-18 06:49:14.722803] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:15:21.699 [2024-11-18 06:49:14.722856] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:21.699 [2024-11-18 06:49:14.722914] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:21.699 
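The "WAF: inf" line in the statistics dump above is the write amplification factor, i.e. total device writes divided by user writes. This run performed 960 internal writes during startup/shutdown and zero user writes, so the ratio is undefined and the dump prints "inf". A minimal sketch of that arithmetic, with an illustrative helper name rather than anything from SPDK:

    def waf(total_writes: int, user_writes: int) -> float:
        # With no user I/O the denominator is zero; report infinity, as the dump does.
        return total_writes / user_writes if user_writes else float("inf")

    # Matches "total writes: 960", "user writes: 0", "WAF: inf" above.
    assert waf(960, 0) == float("inf")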
[2024-11-18 06:49:14.723001] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:21.699 [2024-11-18 06:49:14.723037] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:15:21.699 [2024-11-18 06:49:14.723087] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:21.699 [2024-11-18 06:49:14.723114] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:21.699 [2024-11-18 06:49:14.723230] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:21.699 [2024-11-18 06:49:14.723302] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:15:21.699 [2024-11-18 06:49:14.723361] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:21.699 [2024-11-18 06:49:14.723385] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:21.699 [2024-11-18 06:49:14.723499] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:21.699 [2024-11-18 06:49:14.723529] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:15:21.699 [2024-11-18 06:49:14.723598] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:21.699 [2024-11-18 06:49:14.723622] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:21.699 [2024-11-18 06:49:14.733190] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:21.699 [2024-11-18 06:49:14.733307] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:15:21.699 [2024-11-18 06:49:14.733358] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:21.699 [2024-11-18 06:49:14.733381] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:21.699 [2024-11-18 06:49:14.741386] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:21.699 [2024-11-18 06:49:14.741516] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:15:21.699 [2024-11-18 06:49:14.741569] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:21.699 [2024-11-18 06:49:14.741582] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:21.699 [2024-11-18 06:49:14.741654] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:21.699 [2024-11-18 06:49:14.741668] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:15:21.699 [2024-11-18 06:49:14.741676] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:21.699 [2024-11-18 06:49:14.741685] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:21.699 [2024-11-18 06:49:14.741737] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:21.699 [2024-11-18 06:49:14.741748] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:15:21.700 [2024-11-18 06:49:14.741755] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:21.700 [2024-11-18 06:49:14.741764] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:21.700 [2024-11-18 06:49:14.741847] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:21.700 [2024-11-18 06:49:14.741860] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:15:21.700 [2024-11-18 06:49:14.741868] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:21.700 [2024-11-18 06:49:14.741877] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:21.700 [2024-11-18 06:49:14.741920] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:21.700 [2024-11-18 06:49:14.741932] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:15:21.700 [2024-11-18 06:49:14.741940] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:21.700 [2024-11-18 06:49:14.741948] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:21.700 [2024-11-18 06:49:14.742008] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:21.700 [2024-11-18 06:49:14.742023] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:15:21.700 [2024-11-18 06:49:14.742031] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:21.700 [2024-11-18 06:49:14.742040] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:21.700 [2024-11-18 06:49:14.742094] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:21.700 [2024-11-18 06:49:14.742105] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:15:21.700 [2024-11-18 06:49:14.742113] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:21.700 [2024-11-18 06:49:14.742122] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:21.700 [2024-11-18 06:49:14.742280] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 50.580 ms, result 0 00:15:21.700 true 00:15:21.700 06:49:14 ftl.ftl_fio_basic -- ftl/fio.sh@75 -- # killprocess 83854 00:15:21.700 06:49:14 ftl.ftl_fio_basic -- common/autotest_common.sh@954 -- # '[' -z 83854 ']' 00:15:21.700 06:49:14 ftl.ftl_fio_basic -- common/autotest_common.sh@958 -- # kill -0 83854 00:15:21.700 06:49:14 ftl.ftl_fio_basic -- common/autotest_common.sh@959 -- # uname 00:15:21.700 06:49:14 ftl.ftl_fio_basic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:21.700 06:49:14 ftl.ftl_fio_basic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83854 00:15:22.018 killing process with pid 83854 00:15:22.018 06:49:14 ftl.ftl_fio_basic -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:22.018 06:49:14 ftl.ftl_fio_basic -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:22.018 06:49:14 ftl.ftl_fio_basic -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83854' 00:15:22.018 06:49:14 ftl.ftl_fio_basic -- common/autotest_common.sh@973 -- # kill 83854 00:15:22.018 06:49:14 ftl.ftl_fio_basic -- common/autotest_common.sh@978 -- # wait 83854 00:15:26.207 06:49:19 ftl.ftl_fio_basic -- ftl/fio.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:15:26.207 06:49:19 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:15:26.207 06:49:19 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify 00:15:26.207 06:49:19 ftl.ftl_fio_basic -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:26.207 06:49:19 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:15:26.207 06:49:19 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:15:26.207 06:49:19 ftl.ftl_fio_basic -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:15:26.207 06:49:19 ftl.ftl_fio_basic -- 
common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:15:26.207 06:49:19 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:15:26.207 06:49:19 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # local sanitizers 00:15:26.207 06:49:19 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:26.207 06:49:19 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # shift 00:15:26.207 06:49:19 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # local asan_lib= 00:15:26.207 06:49:19 ftl.ftl_fio_basic -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:15:26.207 06:49:19 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:26.207 06:49:19 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # grep libasan 00:15:26.207 06:49:19 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:15:26.207 06:49:19 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:15:26.207 06:49:19 ftl.ftl_fio_basic -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:15:26.207 06:49:19 ftl.ftl_fio_basic -- common/autotest_common.sh@1351 -- # break 00:15:26.207 06:49:19 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:15:26.207 06:49:19 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:15:26.469 test: (g=0): rw=randwrite, bs=(R) 68.0KiB-68.0KiB, (W) 68.0KiB-68.0KiB, (T) 68.0KiB-68.0KiB, ioengine=spdk_bdev, iodepth=1 00:15:26.469 fio-3.35 00:15:26.469 Starting 1 thread 00:15:29.774 00:15:29.774 test: (groupid=0, jobs=1): err= 0: pid=84023: Mon Nov 18 06:49:22 2024 00:15:29.774 read: IOPS=1388, BW=92.2MiB/s (96.7MB/s)(255MiB/2761msec) 00:15:29.774 slat (nsec): min=2955, max=18957, avg=3812.04, stdev=1617.63 00:15:29.774 clat (usec): min=233, max=2825, avg=326.32, stdev=72.26 00:15:29.774 lat (usec): min=236, max=2829, avg=330.13, stdev=72.68 00:15:29.774 clat percentiles (usec): 00:15:29.774 | 1.00th=[ 262], 5.00th=[ 277], 10.00th=[ 281], 20.00th=[ 302], 00:15:29.774 | 30.00th=[ 306], 40.00th=[ 310], 50.00th=[ 314], 60.00th=[ 314], 00:15:29.774 | 70.00th=[ 322], 80.00th=[ 330], 90.00th=[ 404], 95.00th=[ 437], 00:15:29.774 | 99.00th=[ 586], 99.50th=[ 693], 99.90th=[ 898], 99.95th=[ 922], 00:15:29.774 | 99.99th=[ 2835] 00:15:29.774 write: IOPS=1398, BW=92.8MiB/s (97.4MB/s)(256MiB/2758msec); 0 zone resets 00:15:29.774 slat (usec): min=13, max=107, avg=17.01, stdev= 3.33 00:15:29.774 clat (usec): min=254, max=1057, avg=357.28, stdev=75.43 00:15:29.774 lat (usec): min=270, max=1084, avg=374.29, stdev=76.19 00:15:29.774 clat percentiles (usec): 00:15:29.774 | 1.00th=[ 289], 5.00th=[ 302], 10.00th=[ 306], 20.00th=[ 330], 00:15:29.774 | 30.00th=[ 330], 40.00th=[ 334], 50.00th=[ 338], 60.00th=[ 343], 00:15:29.774 | 70.00th=[ 351], 80.00th=[ 359], 90.00th=[ 408], 95.00th=[ 478], 00:15:29.774 | 99.00th=[ 742], 99.50th=[ 840], 99.90th=[ 955], 99.95th=[ 979], 00:15:29.774 | 99.99th=[ 1057] 00:15:29.774 bw ( KiB/s): min=89760, max=102816, per=99.88%, avg=94955.20, stdev=4988.07, samples=5 00:15:29.774 iops : min= 1320, max= 1512, avg=1396.40, stdev=73.35, samples=5 00:15:29.774 lat (usec) : 250=0.23%, 500=96.98%, 750=2.13%, 1000=0.62% 
00:15:29.774 lat (msec) : 2=0.01%, 4=0.01% 00:15:29.774 cpu : usr=99.38%, sys=0.00%, ctx=5, majf=0, minf=1181 00:15:29.774 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:29.774 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:29.774 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:29.774 issued rwts: total=3833,3856,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:29.774 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:29.774 00:15:29.774 Run status group 0 (all jobs): 00:15:29.774 READ: bw=92.2MiB/s (96.7MB/s), 92.2MiB/s-92.2MiB/s (96.7MB/s-96.7MB/s), io=255MiB (267MB), run=2761-2761msec 00:15:29.774 WRITE: bw=92.8MiB/s (97.4MB/s), 92.8MiB/s-92.8MiB/s (97.4MB/s-97.4MB/s), io=256MiB (269MB), run=2758-2758msec 00:15:30.738 ----------------------------------------------------- 00:15:30.738 Suppressions used: 00:15:30.738 count bytes template 00:15:30.738 1 5 /usr/src/fio/parse.c 00:15:30.738 1 8 libtcmalloc_minimal.so 00:15:30.738 1 904 libcrypto.so 00:15:30.738 ----------------------------------------------------- 00:15:30.738 00:15:30.738 06:49:23 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify 00:15:30.738 06:49:23 ftl.ftl_fio_basic -- common/autotest_common.sh@732 -- # xtrace_disable 00:15:30.738 06:49:23 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:15:30.738 06:49:23 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:15:30.738 06:49:23 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify-j2 00:15:30.738 06:49:23 ftl.ftl_fio_basic -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:30.738 06:49:23 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:15:30.738 06:49:23 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:15:30.738 06:49:23 ftl.ftl_fio_basic -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:15:30.738 06:49:23 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:15:30.738 06:49:23 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:15:30.738 06:49:23 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # local sanitizers 00:15:30.738 06:49:23 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:30.738 06:49:23 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # shift 00:15:30.738 06:49:23 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # local asan_lib= 00:15:30.738 06:49:23 ftl.ftl_fio_basic -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:15:30.738 06:49:23 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:30.738 06:49:23 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # grep libasan 00:15:30.738 06:49:23 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:15:30.738 06:49:23 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:15:30.738 06:49:23 ftl.ftl_fio_basic -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:15:30.738 06:49:23 ftl.ftl_fio_basic -- common/autotest_common.sh@1351 -- # break 00:15:30.738 06:49:23 ftl.ftl_fio_basic -- 
common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:15:30.738 06:49:23 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:15:30.738 first_half: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:15:30.738 second_half: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:15:30.738 fio-3.35 00:15:30.738 Starting 2 threads 00:15:57.399 00:15:57.399 first_half: (groupid=0, jobs=1): err= 0: pid=84106: Mon Nov 18 06:49:46 2024 00:15:57.399 read: IOPS=2932, BW=11.5MiB/s (12.0MB/s)(255MiB/22246msec) 00:15:57.399 slat (nsec): min=3055, max=38650, avg=4121.34, stdev=1262.11 00:15:57.399 clat (usec): min=583, max=347534, avg=34419.61, stdev=17026.17 00:15:57.399 lat (usec): min=588, max=347539, avg=34423.73, stdev=17026.32 00:15:57.399 clat percentiles (msec): 00:15:57.399 | 1.00th=[ 7], 5.00th=[ 29], 10.00th=[ 30], 20.00th=[ 30], 00:15:57.399 | 30.00th=[ 31], 40.00th=[ 31], 50.00th=[ 31], 60.00th=[ 31], 00:15:57.399 | 70.00th=[ 33], 80.00th=[ 35], 90.00th=[ 39], 95.00th=[ 50], 00:15:57.399 | 99.00th=[ 130], 99.50th=[ 142], 99.90th=[ 176], 99.95th=[ 296], 00:15:57.399 | 99.99th=[ 338] 00:15:57.399 write: IOPS=3808, BW=14.9MiB/s (15.6MB/s)(256MiB/17206msec); 0 zone resets 00:15:57.399 slat (usec): min=3, max=988, avg= 5.88, stdev= 5.41 00:15:57.399 clat (usec): min=365, max=71478, avg=9156.85, stdev=14647.31 00:15:57.399 lat (usec): min=374, max=71482, avg=9162.73, stdev=14647.54 00:15:57.399 clat percentiles (usec): 00:15:57.399 | 1.00th=[ 652], 5.00th=[ 766], 10.00th=[ 906], 20.00th=[ 1139], 00:15:57.399 | 30.00th=[ 2073], 40.00th=[ 3458], 50.00th=[ 4817], 60.00th=[ 5407], 00:15:57.399 | 70.00th=[ 6128], 80.00th=[10159], 90.00th=[19530], 95.00th=[57934], 00:15:57.399 | 99.00th=[64750], 99.50th=[66323], 99.90th=[68682], 99.95th=[69731], 00:15:57.399 | 99.99th=[70779] 00:15:57.399 bw ( KiB/s): min= 9896, max=40528, per=100.00%, avg=27594.11, stdev=11241.04, samples=19 00:15:57.399 iops : min= 2474, max=10132, avg=6898.53, stdev=2810.26, samples=19 00:15:57.399 lat (usec) : 500=0.05%, 750=2.11%, 1000=4.78% 00:15:57.399 lat (msec) : 2=8.12%, 4=6.73%, 10=18.54%, 20=5.98%, 50=48.04% 00:15:57.399 lat (msec) : 100=4.79%, 250=0.82%, 500=0.04% 00:15:57.399 cpu : usr=99.34%, sys=0.13%, ctx=52, majf=0, minf=5605 00:15:57.399 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:15:57.399 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:57.399 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:57.399 issued rwts: total=65240,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:57.399 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:57.399 second_half: (groupid=0, jobs=1): err= 0: pid=84107: Mon Nov 18 06:49:46 2024 00:15:57.399 read: IOPS=2899, BW=11.3MiB/s (11.9MB/s)(255MiB/22504msec) 00:15:57.399 slat (nsec): min=2995, max=48028, avg=5269.93, stdev=1404.20 00:15:57.399 clat (usec): min=655, max=353552, avg=34034.57, stdev=18399.63 00:15:57.399 lat (usec): min=660, max=353558, avg=34039.84, stdev=18399.73 00:15:57.399 clat percentiles (msec): 00:15:57.399 | 1.00th=[ 9], 5.00th=[ 25], 10.00th=[ 30], 20.00th=[ 30], 00:15:57.399 | 30.00th=[ 31], 40.00th=[ 31], 50.00th=[ 31], 60.00th=[ 31], 00:15:57.399 | 70.00th=[ 33], 80.00th=[ 35], 90.00th=[ 
38], 95.00th=[ 45], 00:15:57.399 | 99.00th=[ 134], 99.50th=[ 148], 99.90th=[ 205], 99.95th=[ 268], 00:15:57.399 | 99.99th=[ 351] 00:15:57.399 write: IOPS=3423, BW=13.4MiB/s (14.0MB/s)(256MiB/19145msec); 0 zone resets 00:15:57.399 slat (usec): min=3, max=767, avg= 6.93, stdev= 4.82 00:15:57.399 clat (usec): min=377, max=71639, avg=10046.43, stdev=15566.64 00:15:57.399 lat (usec): min=384, max=71644, avg=10053.36, stdev=15566.91 00:15:57.399 clat percentiles (usec): 00:15:57.399 | 1.00th=[ 660], 5.00th=[ 750], 10.00th=[ 832], 20.00th=[ 1090], 00:15:57.399 | 30.00th=[ 2114], 40.00th=[ 3097], 50.00th=[ 4293], 60.00th=[ 5342], 00:15:57.399 | 70.00th=[ 6456], 80.00th=[13566], 90.00th=[28181], 95.00th=[58459], 00:15:57.399 | 99.00th=[65274], 99.50th=[66847], 99.90th=[69731], 99.95th=[69731], 00:15:57.399 | 99.99th=[70779] 00:15:57.399 bw ( KiB/s): min= 616, max=63384, per=79.78%, avg=21847.67, stdev=14293.20, samples=24 00:15:57.399 iops : min= 154, max=15846, avg=5461.92, stdev=3573.30, samples=24 00:15:57.399 lat (usec) : 500=0.02%, 750=2.62%, 1000=5.84% 00:15:57.399 lat (msec) : 2=6.26%, 4=9.31%, 10=14.81%, 20=6.63%, 50=49.07% 00:15:57.399 lat (msec) : 100=4.37%, 250=1.04%, 500=0.03% 00:15:57.399 cpu : usr=99.24%, sys=0.15%, ctx=31, majf=0, minf=5535 00:15:57.399 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:15:57.399 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:57.399 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:57.399 issued rwts: total=65251,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:57.399 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:57.399 00:15:57.399 Run status group 0 (all jobs): 00:15:57.399 READ: bw=22.7MiB/s (23.8MB/s), 11.3MiB/s-11.5MiB/s (11.9MB/s-12.0MB/s), io=510MiB (534MB), run=22246-22504msec 00:15:57.399 WRITE: bw=26.7MiB/s (28.0MB/s), 13.4MiB/s-14.9MiB/s (14.0MB/s-15.6MB/s), io=512MiB (537MB), run=17206-19145msec 00:15:57.399 ----------------------------------------------------- 00:15:57.399 Suppressions used: 00:15:57.399 count bytes template 00:15:57.399 2 10 /usr/src/fio/parse.c 00:15:57.399 2 192 /usr/src/fio/iolog.c 00:15:57.399 1 8 libtcmalloc_minimal.so 00:15:57.399 1 904 libcrypto.so 00:15:57.399 ----------------------------------------------------- 00:15:57.399 00:15:57.399 06:49:47 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify-j2 00:15:57.399 06:49:47 ftl.ftl_fio_basic -- common/autotest_common.sh@732 -- # xtrace_disable 00:15:57.399 06:49:47 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:15:57.399 06:49:47 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:15:57.399 06:49:47 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify-depth128 00:15:57.399 06:49:47 ftl.ftl_fio_basic -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:57.399 06:49:47 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:15:57.399 06:49:47 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:15:57.399 06:49:47 ftl.ftl_fio_basic -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:15:57.399 06:49:47 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:15:57.399 06:49:47 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 
'libclang_rt.asan') 00:15:57.399 06:49:47 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # local sanitizers 00:15:57.399 06:49:47 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:57.399 06:49:47 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # shift 00:15:57.399 06:49:47 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # local asan_lib= 00:15:57.399 06:49:47 ftl.ftl_fio_basic -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:15:57.399 06:49:47 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:57.399 06:49:47 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:15:57.399 06:49:47 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # grep libasan 00:15:57.399 06:49:48 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:15:57.399 06:49:48 ftl.ftl_fio_basic -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:15:57.399 06:49:48 ftl.ftl_fio_basic -- common/autotest_common.sh@1351 -- # break 00:15:57.399 06:49:48 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:15:57.399 06:49:48 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:15:57.399 test: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:15:57.399 fio-3.35 00:15:57.399 Starting 1 thread 00:16:12.313 00:16:12.313 test: (groupid=0, jobs=1): err= 0: pid=84397: Mon Nov 18 06:50:03 2024 00:16:12.313 read: IOPS=6992, BW=27.3MiB/s (28.6MB/s)(255MiB/9325msec) 00:16:12.313 slat (nsec): min=2893, max=35078, avg=5228.73, stdev=2162.50 00:16:12.313 clat (usec): min=475, max=37118, avg=18296.44, stdev=3070.67 00:16:12.313 lat (usec): min=481, max=37123, avg=18301.67, stdev=3071.43 00:16:12.313 clat percentiles (usec): 00:16:12.313 | 1.00th=[14091], 5.00th=[14353], 10.00th=[14746], 20.00th=[15270], 00:16:12.313 | 30.00th=[15926], 40.00th=[17171], 50.00th=[18220], 60.00th=[19006], 00:16:12.313 | 70.00th=[19792], 80.00th=[20579], 90.00th=[22414], 95.00th=[23987], 00:16:12.313 | 99.00th=[26608], 99.50th=[28181], 99.90th=[30278], 99.95th=[33817], 00:16:12.313 | 99.99th=[36963] 00:16:12.313 write: IOPS=12.3k, BW=48.1MiB/s (50.4MB/s)(256MiB/5327msec); 0 zone resets 00:16:12.313 slat (usec): min=4, max=780, avg= 6.17, stdev= 4.56 00:16:12.313 clat (usec): min=464, max=58602, avg=10344.51, stdev=12593.09 00:16:12.313 lat (usec): min=471, max=58607, avg=10350.68, stdev=12593.13 00:16:12.313 clat percentiles (usec): 00:16:12.313 | 1.00th=[ 717], 5.00th=[ 914], 10.00th=[ 1045], 20.00th=[ 1336], 00:16:12.313 | 30.00th=[ 1713], 40.00th=[ 2638], 50.00th=[ 6849], 60.00th=[ 8160], 00:16:12.313 | 70.00th=[ 9765], 80.00th=[12387], 90.00th=[34341], 95.00th=[38536], 00:16:12.313 | 99.00th=[50594], 99.50th=[52691], 99.90th=[55313], 99.95th=[55837], 00:16:12.313 | 99.99th=[57410] 00:16:12.313 bw ( KiB/s): min=27880, max=69576, per=96.85%, avg=47662.55, stdev=11854.44, samples=11 00:16:12.313 iops : min= 6970, max=17394, avg=11915.64, stdev=2963.61, samples=11 00:16:12.313 lat (usec) : 500=0.01%, 750=0.72%, 1000=3.35% 00:16:12.313 lat (msec) : 2=13.76%, 4=3.02%, 10=14.82%, 20=42.97%, 50=20.78% 00:16:12.313 lat (msec) : 100=0.59% 00:16:12.313 cpu 
: usr=99.00%, sys=0.20%, ctx=31, majf=0, minf=5577 00:16:12.313 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:16:12.313 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:12.313 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:12.313 issued rwts: total=65202,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:12.313 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:12.313 00:16:12.313 Run status group 0 (all jobs): 00:16:12.313 READ: bw=27.3MiB/s (28.6MB/s), 27.3MiB/s-27.3MiB/s (28.6MB/s-28.6MB/s), io=255MiB (267MB), run=9325-9325msec 00:16:12.313 WRITE: bw=48.1MiB/s (50.4MB/s), 48.1MiB/s-48.1MiB/s (50.4MB/s-50.4MB/s), io=256MiB (268MB), run=5327-5327msec 00:16:12.313 ----------------------------------------------------- 00:16:12.313 Suppressions used: 00:16:12.313 count bytes template 00:16:12.313 1 5 /usr/src/fio/parse.c 00:16:12.313 2 192 /usr/src/fio/iolog.c 00:16:12.313 1 8 libtcmalloc_minimal.so 00:16:12.313 1 904 libcrypto.so 00:16:12.313 ----------------------------------------------------- 00:16:12.313 00:16:12.313 06:50:04 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify-depth128 00:16:12.313 06:50:04 ftl.ftl_fio_basic -- common/autotest_common.sh@732 -- # xtrace_disable 00:16:12.313 06:50:04 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:16:12.313 06:50:04 ftl.ftl_fio_basic -- ftl/fio.sh@84 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:16:12.313 Remove shared memory files 00:16:12.313 06:50:04 ftl.ftl_fio_basic -- ftl/fio.sh@85 -- # remove_shm 00:16:12.313 06:50:04 ftl.ftl_fio_basic -- ftl/common.sh@204 -- # echo Remove shared memory files 00:16:12.313 06:50:04 ftl.ftl_fio_basic -- ftl/common.sh@205 -- # rm -f rm -f 00:16:12.313 06:50:04 ftl.ftl_fio_basic -- ftl/common.sh@206 -- # rm -f rm -f 00:16:12.313 06:50:04 ftl.ftl_fio_basic -- ftl/common.sh@207 -- # rm -f rm -f /dev/shm/spdk_tgt_trace.pid69377 /dev/shm/spdk_tgt_trace.pid82807 00:16:12.313 06:50:04 ftl.ftl_fio_basic -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:16:12.313 06:50:04 ftl.ftl_fio_basic -- ftl/common.sh@209 -- # rm -f rm -f 00:16:12.313 ************************************ 00:16:12.313 END TEST ftl_fio_basic 00:16:12.313 ************************************ 00:16:12.313 00:16:12.313 real 0m56.253s 00:16:12.313 user 2m6.567s 00:16:12.313 sys 0m2.723s 00:16:12.313 06:50:04 ftl.ftl_fio_basic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:12.313 06:50:04 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:16:12.313 06:50:04 ftl -- ftl/ftl.sh@74 -- # run_test ftl_bdevperf /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 0000:00:11.0 0000:00:10.0 00:16:12.313 06:50:04 ftl -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:16:12.313 06:50:04 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:12.313 06:50:04 ftl -- common/autotest_common.sh@10 -- # set +x 00:16:12.313 ************************************ 00:16:12.313 START TEST ftl_bdevperf 00:16:12.313 ************************************ 00:16:12.313 06:50:04 ftl.ftl_bdevperf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 0000:00:11.0 0000:00:10.0 00:16:12.313 * Looking for test storage... 
00:16:12.313 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:16:12.313 06:50:04 ftl.ftl_bdevperf -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:16:12.313 06:50:04 ftl.ftl_bdevperf -- common/autotest_common.sh@1693 -- # lcov --version 00:16:12.313 06:50:04 ftl.ftl_bdevperf -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:16:12.313 06:50:04 ftl.ftl_bdevperf -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:16:12.313 06:50:04 ftl.ftl_bdevperf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:12.313 06:50:04 ftl.ftl_bdevperf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:12.313 06:50:04 ftl.ftl_bdevperf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:12.313 06:50:04 ftl.ftl_bdevperf -- scripts/common.sh@336 -- # IFS=.-: 00:16:12.313 06:50:04 ftl.ftl_bdevperf -- scripts/common.sh@336 -- # read -ra ver1 00:16:12.313 06:50:04 ftl.ftl_bdevperf -- scripts/common.sh@337 -- # IFS=.-: 00:16:12.313 06:50:04 ftl.ftl_bdevperf -- scripts/common.sh@337 -- # read -ra ver2 00:16:12.313 06:50:04 ftl.ftl_bdevperf -- scripts/common.sh@338 -- # local 'op=<' 00:16:12.313 06:50:04 ftl.ftl_bdevperf -- scripts/common.sh@340 -- # ver1_l=2 00:16:12.313 06:50:04 ftl.ftl_bdevperf -- scripts/common.sh@341 -- # ver2_l=1 00:16:12.313 06:50:04 ftl.ftl_bdevperf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:12.313 06:50:04 ftl.ftl_bdevperf -- scripts/common.sh@344 -- # case "$op" in 00:16:12.313 06:50:04 ftl.ftl_bdevperf -- scripts/common.sh@345 -- # : 1 00:16:12.313 06:50:04 ftl.ftl_bdevperf -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:12.313 06:50:04 ftl.ftl_bdevperf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:16:12.314 06:50:04 ftl.ftl_bdevperf -- scripts/common.sh@365 -- # decimal 1 00:16:12.314 06:50:04 ftl.ftl_bdevperf -- scripts/common.sh@353 -- # local d=1 00:16:12.314 06:50:04 ftl.ftl_bdevperf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:12.314 06:50:04 ftl.ftl_bdevperf -- scripts/common.sh@355 -- # echo 1 00:16:12.314 06:50:04 ftl.ftl_bdevperf -- scripts/common.sh@365 -- # ver1[v]=1 00:16:12.314 06:50:04 ftl.ftl_bdevperf -- scripts/common.sh@366 -- # decimal 2 00:16:12.314 06:50:04 ftl.ftl_bdevperf -- scripts/common.sh@353 -- # local d=2 00:16:12.314 06:50:04 ftl.ftl_bdevperf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:12.314 06:50:04 ftl.ftl_bdevperf -- scripts/common.sh@355 -- # echo 2 00:16:12.314 06:50:04 ftl.ftl_bdevperf -- scripts/common.sh@366 -- # ver2[v]=2 00:16:12.314 06:50:04 ftl.ftl_bdevperf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:12.314 06:50:04 ftl.ftl_bdevperf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:12.314 06:50:04 ftl.ftl_bdevperf -- scripts/common.sh@368 -- # return 0 00:16:12.314 06:50:04 ftl.ftl_bdevperf -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:12.314 06:50:04 ftl.ftl_bdevperf -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:16:12.314 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:12.314 --rc genhtml_branch_coverage=1 00:16:12.314 --rc genhtml_function_coverage=1 00:16:12.314 --rc genhtml_legend=1 00:16:12.314 --rc geninfo_all_blocks=1 00:16:12.314 --rc geninfo_unexecuted_blocks=1 00:16:12.314 00:16:12.314 ' 00:16:12.314 06:50:04 ftl.ftl_bdevperf -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:16:12.314 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:12.314 --rc genhtml_branch_coverage=1 00:16:12.314 
--rc genhtml_function_coverage=1 00:16:12.314 --rc genhtml_legend=1 00:16:12.314 --rc geninfo_all_blocks=1 00:16:12.314 --rc geninfo_unexecuted_blocks=1 00:16:12.314 00:16:12.314 ' 00:16:12.314 06:50:04 ftl.ftl_bdevperf -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:16:12.314 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:12.314 --rc genhtml_branch_coverage=1 00:16:12.314 --rc genhtml_function_coverage=1 00:16:12.314 --rc genhtml_legend=1 00:16:12.314 --rc geninfo_all_blocks=1 00:16:12.314 --rc geninfo_unexecuted_blocks=1 00:16:12.314 00:16:12.314 ' 00:16:12.314 06:50:04 ftl.ftl_bdevperf -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:16:12.314 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:12.314 --rc genhtml_branch_coverage=1 00:16:12.314 --rc genhtml_function_coverage=1 00:16:12.314 --rc genhtml_legend=1 00:16:12.314 --rc geninfo_all_blocks=1 00:16:12.314 --rc geninfo_unexecuted_blocks=1 00:16:12.314 00:16:12.314 ' 00:16:12.314 06:50:04 ftl.ftl_bdevperf -- ftl/bdevperf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:16:12.314 06:50:04 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 00:16:12.314 06:50:04 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:16:12.314 06:50:04 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:16:12.314 06:50:04 ftl.ftl_bdevperf -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:16:12.314 06:50:04 ftl.ftl_bdevperf -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:16:12.314 06:50:04 ftl.ftl_bdevperf -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:12.314 06:50:04 ftl.ftl_bdevperf -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:16:12.314 06:50:04 ftl.ftl_bdevperf -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:16:12.314 06:50:04 ftl.ftl_bdevperf -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:12.314 06:50:04 ftl.ftl_bdevperf -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:12.314 06:50:04 ftl.ftl_bdevperf -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:16:12.314 06:50:04 ftl.ftl_bdevperf -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:16:12.314 06:50:04 ftl.ftl_bdevperf -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:16:12.314 06:50:04 ftl.ftl_bdevperf -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:16:12.314 06:50:04 ftl.ftl_bdevperf -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:16:12.314 06:50:04 ftl.ftl_bdevperf -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:16:12.314 06:50:04 ftl.ftl_bdevperf -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:12.314 06:50:04 ftl.ftl_bdevperf -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:12.314 06:50:04 ftl.ftl_bdevperf -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:16:12.314 06:50:04 ftl.ftl_bdevperf -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:16:12.314 06:50:04 ftl.ftl_bdevperf -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:16:12.314 06:50:04 ftl.ftl_bdevperf -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:16:12.314 06:50:04 ftl.ftl_bdevperf -- ftl/common.sh@22 -- # export 
spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:16:12.314 06:50:04 ftl.ftl_bdevperf -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:16:12.314 06:50:04 ftl.ftl_bdevperf -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:16:12.314 06:50:04 ftl.ftl_bdevperf -- ftl/common.sh@23 -- # spdk_ini_pid= 00:16:12.314 06:50:04 ftl.ftl_bdevperf -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:16:12.314 06:50:04 ftl.ftl_bdevperf -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:16:12.314 06:50:04 ftl.ftl_bdevperf -- ftl/bdevperf.sh@11 -- # device=0000:00:11.0 00:16:12.314 06:50:04 ftl.ftl_bdevperf -- ftl/bdevperf.sh@12 -- # cache_device=0000:00:10.0 00:16:12.314 06:50:04 ftl.ftl_bdevperf -- ftl/bdevperf.sh@13 -- # use_append= 00:16:12.314 06:50:04 ftl.ftl_bdevperf -- ftl/bdevperf.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:12.314 06:50:04 ftl.ftl_bdevperf -- ftl/bdevperf.sh@15 -- # timeout=240 00:16:12.314 06:50:04 ftl.ftl_bdevperf -- ftl/bdevperf.sh@18 -- # bdevperf_pid=84635 00:16:12.314 06:50:04 ftl.ftl_bdevperf -- ftl/bdevperf.sh@20 -- # trap 'killprocess $bdevperf_pid; exit 1' SIGINT SIGTERM EXIT 00:16:12.314 06:50:04 ftl.ftl_bdevperf -- ftl/bdevperf.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -T ftl0 00:16:12.314 06:50:04 ftl.ftl_bdevperf -- ftl/bdevperf.sh@21 -- # waitforlisten 84635 00:16:12.314 06:50:04 ftl.ftl_bdevperf -- common/autotest_common.sh@835 -- # '[' -z 84635 ']' 00:16:12.314 06:50:04 ftl.ftl_bdevperf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:12.314 06:50:04 ftl.ftl_bdevperf -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:12.314 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:12.314 06:50:04 ftl.ftl_bdevperf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:12.314 06:50:04 ftl.ftl_bdevperf -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:12.314 06:50:04 ftl.ftl_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:16:12.314 [2024-11-18 06:50:04.555859] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 
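The bdevperf app above is started with -z, so it idles until it is configured over its RPC socket, and -T ftl0 points all generated I/O at the bdev named ftl0. A minimal standalone sketch of that launch-and-wait pattern, assuming the same SPDK tree as this log; the rpc_get_methods probe is an illustrative way to confirm the socket is answering, standing in for the waitforlisten helper the script actually uses:

    SPDK_DIR=/home/vagrant/spdk_repo/spdk
    # -z: start idle and wait for RPC configuration; -T ftl0: drive I/O only at the ftl0 bdev
    "$SPDK_DIR/build/examples/bdevperf" -z -T ftl0 &
    bdevperf_pid=$!
    # Poll the default RPC socket (/var/tmp/spdk.sock) until the app responds
    until "$SPDK_DIR/scripts/rpc.py" rpc_get_methods >/dev/null 2>&1; do sleep 0.5; done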
00:16:12.314 [2024-11-18 06:50:04.556155] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84635 ] 00:16:12.314 [2024-11-18 06:50:04.715200] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:12.314 [2024-11-18 06:50:04.734707] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:12.314 06:50:05 ftl.ftl_bdevperf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:12.314 06:50:05 ftl.ftl_bdevperf -- common/autotest_common.sh@868 -- # return 0 00:16:12.314 06:50:05 ftl.ftl_bdevperf -- ftl/bdevperf.sh@22 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:16:12.314 06:50:05 ftl.ftl_bdevperf -- ftl/common.sh@54 -- # local name=nvme0 00:16:12.314 06:50:05 ftl.ftl_bdevperf -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:16:12.314 06:50:05 ftl.ftl_bdevperf -- ftl/common.sh@56 -- # local size=103424 00:16:12.314 06:50:05 ftl.ftl_bdevperf -- ftl/common.sh@59 -- # local base_bdev 00:16:12.314 06:50:05 ftl.ftl_bdevperf -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:16:12.889 06:50:05 ftl.ftl_bdevperf -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:16:12.889 06:50:05 ftl.ftl_bdevperf -- ftl/common.sh@62 -- # local base_size 00:16:12.889 06:50:05 ftl.ftl_bdevperf -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:16:12.889 06:50:05 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:16:12.889 06:50:05 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local bdev_info 00:16:12.889 06:50:05 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # local bs 00:16:12.889 06:50:05 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # local nb 00:16:12.889 06:50:05 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:16:12.889 06:50:05 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:16:12.889 { 00:16:12.889 "name": "nvme0n1", 00:16:12.889 "aliases": [ 00:16:12.889 "26d6489d-b549-441e-8d57-9b0605e8ee92" 00:16:12.889 ], 00:16:12.889 "product_name": "NVMe disk", 00:16:12.889 "block_size": 4096, 00:16:12.889 "num_blocks": 1310720, 00:16:12.889 "uuid": "26d6489d-b549-441e-8d57-9b0605e8ee92", 00:16:12.889 "numa_id": -1, 00:16:12.889 "assigned_rate_limits": { 00:16:12.889 "rw_ios_per_sec": 0, 00:16:12.889 "rw_mbytes_per_sec": 0, 00:16:12.889 "r_mbytes_per_sec": 0, 00:16:12.889 "w_mbytes_per_sec": 0 00:16:12.889 }, 00:16:12.889 "claimed": true, 00:16:12.889 "claim_type": "read_many_write_one", 00:16:12.889 "zoned": false, 00:16:12.889 "supported_io_types": { 00:16:12.889 "read": true, 00:16:12.889 "write": true, 00:16:12.889 "unmap": true, 00:16:12.889 "flush": true, 00:16:12.889 "reset": true, 00:16:12.889 "nvme_admin": true, 00:16:12.889 "nvme_io": true, 00:16:12.889 "nvme_io_md": false, 00:16:12.889 "write_zeroes": true, 00:16:12.889 "zcopy": false, 00:16:12.889 "get_zone_info": false, 00:16:12.889 "zone_management": false, 00:16:12.889 "zone_append": false, 00:16:12.889 "compare": true, 00:16:12.889 "compare_and_write": false, 00:16:12.889 "abort": true, 00:16:12.889 "seek_hole": false, 00:16:12.889 "seek_data": false, 00:16:12.889 "copy": true, 00:16:12.889 "nvme_iov_md": false 00:16:12.889 }, 00:16:12.889 "driver_specific": { 00:16:12.889 
"nvme": [ 00:16:12.889 { 00:16:12.889 "pci_address": "0000:00:11.0", 00:16:12.889 "trid": { 00:16:12.889 "trtype": "PCIe", 00:16:12.889 "traddr": "0000:00:11.0" 00:16:12.889 }, 00:16:12.889 "ctrlr_data": { 00:16:12.889 "cntlid": 0, 00:16:12.889 "vendor_id": "0x1b36", 00:16:12.889 "model_number": "QEMU NVMe Ctrl", 00:16:12.889 "serial_number": "12341", 00:16:12.889 "firmware_revision": "8.0.0", 00:16:12.889 "subnqn": "nqn.2019-08.org.qemu:12341", 00:16:12.889 "oacs": { 00:16:12.889 "security": 0, 00:16:12.889 "format": 1, 00:16:12.889 "firmware": 0, 00:16:12.889 "ns_manage": 1 00:16:12.889 }, 00:16:12.889 "multi_ctrlr": false, 00:16:12.889 "ana_reporting": false 00:16:12.889 }, 00:16:12.889 "vs": { 00:16:12.889 "nvme_version": "1.4" 00:16:12.889 }, 00:16:12.889 "ns_data": { 00:16:12.889 "id": 1, 00:16:12.889 "can_share": false 00:16:12.889 } 00:16:12.889 } 00:16:12.889 ], 00:16:12.889 "mp_policy": "active_passive" 00:16:12.889 } 00:16:12.889 } 00:16:12.889 ]' 00:16:12.889 06:50:05 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:16:12.889 06:50:05 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bs=4096 00:16:12.889 06:50:05 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:16:12.889 06:50:05 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # nb=1310720 00:16:12.889 06:50:05 ftl.ftl_bdevperf -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:16:12.889 06:50:05 ftl.ftl_bdevperf -- common/autotest_common.sh@1392 -- # echo 5120 00:16:12.889 06:50:05 ftl.ftl_bdevperf -- ftl/common.sh@63 -- # base_size=5120 00:16:12.889 06:50:05 ftl.ftl_bdevperf -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:16:12.889 06:50:05 ftl.ftl_bdevperf -- ftl/common.sh@67 -- # clear_lvols 00:16:12.889 06:50:05 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:16:12.889 06:50:05 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:16:13.150 06:50:06 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # stores=782fa865-e22d-44a2-bc02-1c355c4dea3b 00:16:13.150 06:50:06 ftl.ftl_bdevperf -- ftl/common.sh@29 -- # for lvs in $stores 00:16:13.150 06:50:06 ftl.ftl_bdevperf -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 782fa865-e22d-44a2-bc02-1c355c4dea3b 00:16:13.412 06:50:06 ftl.ftl_bdevperf -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:16:13.673 06:50:06 ftl.ftl_bdevperf -- ftl/common.sh@68 -- # lvs=33e5fa32-d8d5-4074-a632-d7f198c55eff 00:16:13.673 06:50:06 ftl.ftl_bdevperf -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 33e5fa32-d8d5-4074-a632-d7f198c55eff 00:16:13.935 06:50:06 ftl.ftl_bdevperf -- ftl/bdevperf.sh@22 -- # split_bdev=23b3d985-482b-4eaa-a7b6-7624470d85dc 00:16:13.935 06:50:06 ftl.ftl_bdevperf -- ftl/bdevperf.sh@23 -- # create_nv_cache_bdev nvc0 0000:00:10.0 23b3d985-482b-4eaa-a7b6-7624470d85dc 00:16:13.935 06:50:06 ftl.ftl_bdevperf -- ftl/common.sh@35 -- # local name=nvc0 00:16:13.935 06:50:06 ftl.ftl_bdevperf -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:16:13.935 06:50:06 ftl.ftl_bdevperf -- ftl/common.sh@37 -- # local base_bdev=23b3d985-482b-4eaa-a7b6-7624470d85dc 00:16:13.935 06:50:06 ftl.ftl_bdevperf -- ftl/common.sh@38 -- # local cache_size= 00:16:13.935 06:50:06 ftl.ftl_bdevperf -- ftl/common.sh@41 -- # get_bdev_size 23b3d985-482b-4eaa-a7b6-7624470d85dc 00:16:13.935 06:50:06 
ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # local bdev_name=23b3d985-482b-4eaa-a7b6-7624470d85dc 00:16:13.935 06:50:06 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local bdev_info 00:16:13.935 06:50:06 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # local bs 00:16:13.935 06:50:06 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # local nb 00:16:13.935 06:50:06 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 23b3d985-482b-4eaa-a7b6-7624470d85dc 00:16:14.197 06:50:07 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:16:14.197 { 00:16:14.197 "name": "23b3d985-482b-4eaa-a7b6-7624470d85dc", 00:16:14.197 "aliases": [ 00:16:14.197 "lvs/nvme0n1p0" 00:16:14.197 ], 00:16:14.197 "product_name": "Logical Volume", 00:16:14.197 "block_size": 4096, 00:16:14.197 "num_blocks": 26476544, 00:16:14.197 "uuid": "23b3d985-482b-4eaa-a7b6-7624470d85dc", 00:16:14.197 "assigned_rate_limits": { 00:16:14.197 "rw_ios_per_sec": 0, 00:16:14.197 "rw_mbytes_per_sec": 0, 00:16:14.197 "r_mbytes_per_sec": 0, 00:16:14.197 "w_mbytes_per_sec": 0 00:16:14.197 }, 00:16:14.197 "claimed": false, 00:16:14.197 "zoned": false, 00:16:14.197 "supported_io_types": { 00:16:14.197 "read": true, 00:16:14.197 "write": true, 00:16:14.197 "unmap": true, 00:16:14.197 "flush": false, 00:16:14.197 "reset": true, 00:16:14.197 "nvme_admin": false, 00:16:14.197 "nvme_io": false, 00:16:14.197 "nvme_io_md": false, 00:16:14.197 "write_zeroes": true, 00:16:14.197 "zcopy": false, 00:16:14.197 "get_zone_info": false, 00:16:14.197 "zone_management": false, 00:16:14.197 "zone_append": false, 00:16:14.197 "compare": false, 00:16:14.197 "compare_and_write": false, 00:16:14.197 "abort": false, 00:16:14.197 "seek_hole": true, 00:16:14.197 "seek_data": true, 00:16:14.197 "copy": false, 00:16:14.197 "nvme_iov_md": false 00:16:14.197 }, 00:16:14.197 "driver_specific": { 00:16:14.197 "lvol": { 00:16:14.197 "lvol_store_uuid": "33e5fa32-d8d5-4074-a632-d7f198c55eff", 00:16:14.197 "base_bdev": "nvme0n1", 00:16:14.197 "thin_provision": true, 00:16:14.197 "num_allocated_clusters": 0, 00:16:14.197 "snapshot": false, 00:16:14.197 "clone": false, 00:16:14.197 "esnap_clone": false 00:16:14.197 } 00:16:14.197 } 00:16:14.197 } 00:16:14.197 ]' 00:16:14.197 06:50:07 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:16:14.197 06:50:07 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bs=4096 00:16:14.197 06:50:07 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:16:14.197 06:50:07 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # nb=26476544 00:16:14.197 06:50:07 ftl.ftl_bdevperf -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:16:14.197 06:50:07 ftl.ftl_bdevperf -- common/autotest_common.sh@1392 -- # echo 103424 00:16:14.197 06:50:07 ftl.ftl_bdevperf -- ftl/common.sh@41 -- # local base_size=5171 00:16:14.197 06:50:07 ftl.ftl_bdevperf -- ftl/common.sh@44 -- # local nvc_bdev 00:16:14.197 06:50:07 ftl.ftl_bdevperf -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:16:14.458 06:50:07 ftl.ftl_bdevperf -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:16:14.458 06:50:07 ftl.ftl_bdevperf -- ftl/common.sh@47 -- # [[ -z '' ]] 00:16:14.458 06:50:07 ftl.ftl_bdevperf -- ftl/common.sh@48 -- # get_bdev_size 23b3d985-482b-4eaa-a7b6-7624470d85dc 00:16:14.458 06:50:07 ftl.ftl_bdevperf -- 
common/autotest_common.sh@1382 -- # local bdev_name=23b3d985-482b-4eaa-a7b6-7624470d85dc 00:16:14.458 06:50:07 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local bdev_info 00:16:14.458 06:50:07 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # local bs 00:16:14.458 06:50:07 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # local nb 00:16:14.458 06:50:07 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 23b3d985-482b-4eaa-a7b6-7624470d85dc 00:16:14.719 06:50:07 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:16:14.719 { 00:16:14.719 "name": "23b3d985-482b-4eaa-a7b6-7624470d85dc", 00:16:14.720 "aliases": [ 00:16:14.720 "lvs/nvme0n1p0" 00:16:14.720 ], 00:16:14.720 "product_name": "Logical Volume", 00:16:14.720 "block_size": 4096, 00:16:14.720 "num_blocks": 26476544, 00:16:14.720 "uuid": "23b3d985-482b-4eaa-a7b6-7624470d85dc", 00:16:14.720 "assigned_rate_limits": { 00:16:14.720 "rw_ios_per_sec": 0, 00:16:14.720 "rw_mbytes_per_sec": 0, 00:16:14.720 "r_mbytes_per_sec": 0, 00:16:14.720 "w_mbytes_per_sec": 0 00:16:14.720 }, 00:16:14.720 "claimed": false, 00:16:14.720 "zoned": false, 00:16:14.720 "supported_io_types": { 00:16:14.720 "read": true, 00:16:14.720 "write": true, 00:16:14.720 "unmap": true, 00:16:14.720 "flush": false, 00:16:14.720 "reset": true, 00:16:14.720 "nvme_admin": false, 00:16:14.720 "nvme_io": false, 00:16:14.720 "nvme_io_md": false, 00:16:14.720 "write_zeroes": true, 00:16:14.720 "zcopy": false, 00:16:14.720 "get_zone_info": false, 00:16:14.720 "zone_management": false, 00:16:14.720 "zone_append": false, 00:16:14.720 "compare": false, 00:16:14.720 "compare_and_write": false, 00:16:14.720 "abort": false, 00:16:14.720 "seek_hole": true, 00:16:14.720 "seek_data": true, 00:16:14.720 "copy": false, 00:16:14.720 "nvme_iov_md": false 00:16:14.720 }, 00:16:14.720 "driver_specific": { 00:16:14.720 "lvol": { 00:16:14.720 "lvol_store_uuid": "33e5fa32-d8d5-4074-a632-d7f198c55eff", 00:16:14.720 "base_bdev": "nvme0n1", 00:16:14.720 "thin_provision": true, 00:16:14.720 "num_allocated_clusters": 0, 00:16:14.720 "snapshot": false, 00:16:14.720 "clone": false, 00:16:14.720 "esnap_clone": false 00:16:14.720 } 00:16:14.720 } 00:16:14.720 } 00:16:14.720 ]' 00:16:14.720 06:50:07 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:16:14.720 06:50:07 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bs=4096 00:16:14.720 06:50:07 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:16:14.720 06:50:07 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # nb=26476544 00:16:14.720 06:50:07 ftl.ftl_bdevperf -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:16:14.720 06:50:07 ftl.ftl_bdevperf -- common/autotest_common.sh@1392 -- # echo 103424 00:16:14.720 06:50:07 ftl.ftl_bdevperf -- ftl/common.sh@48 -- # cache_size=5171 00:16:14.720 06:50:07 ftl.ftl_bdevperf -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:16:14.980 06:50:07 ftl.ftl_bdevperf -- ftl/bdevperf.sh@23 -- # nv_cache=nvc0n1p0 00:16:14.980 06:50:07 ftl.ftl_bdevperf -- ftl/bdevperf.sh@25 -- # get_bdev_size 23b3d985-482b-4eaa-a7b6-7624470d85dc 00:16:14.980 06:50:07 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # local bdev_name=23b3d985-482b-4eaa-a7b6-7624470d85dc 00:16:14.980 06:50:07 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local bdev_info 00:16:14.980 06:50:07 ftl.ftl_bdevperf -- 
common/autotest_common.sh@1384 -- # local bs 00:16:14.980 06:50:07 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # local nb 00:16:14.980 06:50:07 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 23b3d985-482b-4eaa-a7b6-7624470d85dc 00:16:15.241 06:50:08 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:16:15.241 { 00:16:15.241 "name": "23b3d985-482b-4eaa-a7b6-7624470d85dc", 00:16:15.241 "aliases": [ 00:16:15.241 "lvs/nvme0n1p0" 00:16:15.241 ], 00:16:15.241 "product_name": "Logical Volume", 00:16:15.241 "block_size": 4096, 00:16:15.241 "num_blocks": 26476544, 00:16:15.241 "uuid": "23b3d985-482b-4eaa-a7b6-7624470d85dc", 00:16:15.241 "assigned_rate_limits": { 00:16:15.241 "rw_ios_per_sec": 0, 00:16:15.241 "rw_mbytes_per_sec": 0, 00:16:15.241 "r_mbytes_per_sec": 0, 00:16:15.241 "w_mbytes_per_sec": 0 00:16:15.241 }, 00:16:15.241 "claimed": false, 00:16:15.241 "zoned": false, 00:16:15.241 "supported_io_types": { 00:16:15.241 "read": true, 00:16:15.241 "write": true, 00:16:15.241 "unmap": true, 00:16:15.241 "flush": false, 00:16:15.241 "reset": true, 00:16:15.241 "nvme_admin": false, 00:16:15.241 "nvme_io": false, 00:16:15.241 "nvme_io_md": false, 00:16:15.241 "write_zeroes": true, 00:16:15.241 "zcopy": false, 00:16:15.241 "get_zone_info": false, 00:16:15.241 "zone_management": false, 00:16:15.241 "zone_append": false, 00:16:15.241 "compare": false, 00:16:15.241 "compare_and_write": false, 00:16:15.241 "abort": false, 00:16:15.241 "seek_hole": true, 00:16:15.241 "seek_data": true, 00:16:15.241 "copy": false, 00:16:15.241 "nvme_iov_md": false 00:16:15.241 }, 00:16:15.241 "driver_specific": { 00:16:15.241 "lvol": { 00:16:15.241 "lvol_store_uuid": "33e5fa32-d8d5-4074-a632-d7f198c55eff", 00:16:15.241 "base_bdev": "nvme0n1", 00:16:15.241 "thin_provision": true, 00:16:15.241 "num_allocated_clusters": 0, 00:16:15.241 "snapshot": false, 00:16:15.241 "clone": false, 00:16:15.241 "esnap_clone": false 00:16:15.241 } 00:16:15.241 } 00:16:15.241 } 00:16:15.241 ]' 00:16:15.241 06:50:08 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:16:15.241 06:50:08 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bs=4096 00:16:15.241 06:50:08 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:16:15.241 06:50:08 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # nb=26476544 00:16:15.241 06:50:08 ftl.ftl_bdevperf -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:16:15.241 06:50:08 ftl.ftl_bdevperf -- common/autotest_common.sh@1392 -- # echo 103424 00:16:15.241 06:50:08 ftl.ftl_bdevperf -- ftl/bdevperf.sh@25 -- # l2p_dram_size_mb=20 00:16:15.241 06:50:08 ftl.ftl_bdevperf -- ftl/bdevperf.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 23b3d985-482b-4eaa-a7b6-7624470d85dc -c nvc0n1p0 --l2p_dram_limit 20 00:16:15.504 [2024-11-18 06:50:08.393615] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:15.504 [2024-11-18 06:50:08.393728] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:16:15.504 [2024-11-18 06:50:08.393749] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:16:15.504 [2024-11-18 06:50:08.393757] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:15.504 [2024-11-18 06:50:08.393796] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:15.504 [2024-11-18 06:50:08.393803] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:16:15.504 [2024-11-18 06:50:08.393813] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.025 ms 00:16:15.504 [2024-11-18 06:50:08.393819] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:15.504 [2024-11-18 06:50:08.393834] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:16:15.504 [2024-11-18 06:50:08.394030] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:16:15.504 [2024-11-18 06:50:08.394043] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:15.504 [2024-11-18 06:50:08.394049] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:16:15.504 [2024-11-18 06:50:08.394059] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.212 ms 00:16:15.504 [2024-11-18 06:50:08.394068] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:15.504 [2024-11-18 06:50:08.394152] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 651bbe35-ce39-425a-86a8-ab3e9565c798 00:16:15.504 [2024-11-18 06:50:08.395066] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:15.504 [2024-11-18 06:50:08.395093] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:16:15.504 [2024-11-18 06:50:08.395101] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:16:15.504 [2024-11-18 06:50:08.395108] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:15.504 [2024-11-18 06:50:08.399631] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:15.504 [2024-11-18 06:50:08.399661] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:16:15.504 [2024-11-18 06:50:08.399669] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.495 ms 00:16:15.504 [2024-11-18 06:50:08.399680] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:15.504 [2024-11-18 06:50:08.399730] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:15.504 [2024-11-18 06:50:08.399738] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:16:15.504 [2024-11-18 06:50:08.399745] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:16:15.504 [2024-11-18 06:50:08.399753] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:15.504 [2024-11-18 06:50:08.399788] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:15.504 [2024-11-18 06:50:08.399797] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:16:15.504 [2024-11-18 06:50:08.399803] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:16:15.504 [2024-11-18 06:50:08.399810] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:15.504 [2024-11-18 06:50:08.399826] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:16:15.504 [2024-11-18 06:50:08.401066] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:15.504 [2024-11-18 06:50:08.401089] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:16:15.504 [2024-11-18 06:50:08.401099] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.241 ms 00:16:15.504 [2024-11-18 06:50:08.401104] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:15.504 [2024-11-18 06:50:08.401126] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:15.504 [2024-11-18 06:50:08.401133] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:16:15.504 [2024-11-18 06:50:08.401142] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:16:15.504 [2024-11-18 06:50:08.401147] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:15.504 [2024-11-18 06:50:08.401159] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:16:15.504 [2024-11-18 06:50:08.401264] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:16:15.504 [2024-11-18 06:50:08.401275] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:16:15.504 [2024-11-18 06:50:08.401282] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:16:15.504 [2024-11-18 06:50:08.401291] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:16:15.504 [2024-11-18 06:50:08.401300] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:16:15.504 [2024-11-18 06:50:08.401308] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:16:15.504 [2024-11-18 06:50:08.401314] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:16:15.504 [2024-11-18 06:50:08.401322] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:16:15.504 [2024-11-18 06:50:08.401327] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:16:15.504 [2024-11-18 06:50:08.401336] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:15.504 [2024-11-18 06:50:08.401341] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:16:15.504 [2024-11-18 06:50:08.401348] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.178 ms 00:16:15.504 [2024-11-18 06:50:08.401354] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:15.504 [2024-11-18 06:50:08.401421] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:15.504 [2024-11-18 06:50:08.401428] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:16:15.504 [2024-11-18 06:50:08.401435] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:16:15.504 [2024-11-18 06:50:08.401442] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:15.504 [2024-11-18 06:50:08.401515] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:16:15.504 [2024-11-18 06:50:08.401523] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:16:15.504 [2024-11-18 06:50:08.401531] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:16:15.504 [2024-11-18 06:50:08.401541] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:15.504 [2024-11-18 06:50:08.401548] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:16:15.504 [2024-11-18 06:50:08.401553] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:16:15.504 [2024-11-18 06:50:08.401560] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:16:15.504 
[2024-11-18 06:50:08.401565] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:16:15.504 [2024-11-18 06:50:08.401572] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:16:15.504 [2024-11-18 06:50:08.401577] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:16:15.504 [2024-11-18 06:50:08.401583] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:16:15.504 [2024-11-18 06:50:08.401590] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:16:15.504 [2024-11-18 06:50:08.401598] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:16:15.504 [2024-11-18 06:50:08.401603] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:16:15.504 [2024-11-18 06:50:08.401609] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:16:15.504 [2024-11-18 06:50:08.401614] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:15.504 [2024-11-18 06:50:08.401622] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:16:15.504 [2024-11-18 06:50:08.401627] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:16:15.504 [2024-11-18 06:50:08.401634] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:15.504 [2024-11-18 06:50:08.401639] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:16:15.504 [2024-11-18 06:50:08.401645] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:16:15.504 [2024-11-18 06:50:08.401650] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:15.504 [2024-11-18 06:50:08.401657] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:16:15.504 [2024-11-18 06:50:08.401661] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:16:15.504 [2024-11-18 06:50:08.401668] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:15.504 [2024-11-18 06:50:08.401672] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:16:15.504 [2024-11-18 06:50:08.401679] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:16:15.504 [2024-11-18 06:50:08.401684] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:15.504 [2024-11-18 06:50:08.401691] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:16:15.504 [2024-11-18 06:50:08.401696] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:16:15.504 [2024-11-18 06:50:08.401702] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:15.504 [2024-11-18 06:50:08.401707] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:16:15.504 [2024-11-18 06:50:08.401714] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:16:15.504 [2024-11-18 06:50:08.401720] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:16:15.504 [2024-11-18 06:50:08.401728] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:16:15.504 [2024-11-18 06:50:08.401734] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:16:15.504 [2024-11-18 06:50:08.401741] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:16:15.505 [2024-11-18 06:50:08.401746] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:16:15.505 [2024-11-18 06:50:08.401753] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 
offset: 113.62 MiB 00:16:15.505 [2024-11-18 06:50:08.401758] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:15.505 [2024-11-18 06:50:08.401765] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:16:15.505 [2024-11-18 06:50:08.401771] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:16:15.505 [2024-11-18 06:50:08.401778] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:15.505 [2024-11-18 06:50:08.401784] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:16:15.505 [2024-11-18 06:50:08.401793] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:16:15.505 [2024-11-18 06:50:08.401799] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:16:15.505 [2024-11-18 06:50:08.401806] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:15.505 [2024-11-18 06:50:08.401813] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:16:15.505 [2024-11-18 06:50:08.401821] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:16:15.505 [2024-11-18 06:50:08.401827] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:16:15.505 [2024-11-18 06:50:08.401834] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:16:15.505 [2024-11-18 06:50:08.401840] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:16:15.505 [2024-11-18 06:50:08.401847] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:16:15.505 [2024-11-18 06:50:08.401855] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:16:15.505 [2024-11-18 06:50:08.401864] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:16:15.505 [2024-11-18 06:50:08.401871] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:16:15.505 [2024-11-18 06:50:08.401879] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:16:15.505 [2024-11-18 06:50:08.401885] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:16:15.505 [2024-11-18 06:50:08.401892] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:16:15.505 [2024-11-18 06:50:08.401898] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:16:15.505 [2024-11-18 06:50:08.401908] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:16:15.505 [2024-11-18 06:50:08.401914] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:16:15.505 [2024-11-18 06:50:08.401922] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:16:15.505 [2024-11-18 06:50:08.401928] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:16:15.505 [2024-11-18 06:50:08.401935] upgrade/ftl_sb_v5.c: 
416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:16:15.505 [2024-11-18 06:50:08.401942] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:16:15.505 [2024-11-18 06:50:08.401949] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:16:15.505 [2024-11-18 06:50:08.401956] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:16:15.505 [2024-11-18 06:50:08.401963] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:16:15.505 [2024-11-18 06:50:08.401969] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:16:15.505 [2024-11-18 06:50:08.401991] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:16:15.505 [2024-11-18 06:50:08.402000] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:16:15.505 [2024-11-18 06:50:08.402007] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:16:15.505 [2024-11-18 06:50:08.402014] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:16:15.505 [2024-11-18 06:50:08.402022] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:16:15.505 [2024-11-18 06:50:08.402028] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:15.505 [2024-11-18 06:50:08.402038] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:16:15.505 [2024-11-18 06:50:08.402045] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.567 ms 00:16:15.505 [2024-11-18 06:50:08.402055] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:15.505 [2024-11-18 06:50:08.402079] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 
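Everything from the controller attach down to this NV-cache scrub is driven through rpc.py. Condensed into one place, the bring-up traced above looks roughly like the sketch below; the device addresses, sizes, and the 20 MiB L2P cap are the values from this run, while the lvstore and lvol UUIDs are generated at runtime, so the ones shown are from this log and would differ on a fresh run:

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    # Base device: thin-provisioned 103424 MiB lvol on the PCIe NVMe at 00:11.0
    $RPC bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0
    $RPC bdev_lvol_create_lvstore nvme0n1 lvs
    $RPC bdev_lvol_create nvme0n1p0 103424 -t -u 33e5fa32-d8d5-4074-a632-d7f198c55eff
    # Write-buffer cache: a 5171 MiB split of the PCIe NVMe at 00:10.0
    $RPC bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0
    $RPC bdev_split_create nvc0n1 -s 5171 1
    # Assemble FTL on top; the NV-cache scrub logged above happens inside this call
    $RPC -t 240 bdev_ftl_create -b ftl0 -d 23b3d985-482b-4eaa-a7b6-7624470d85dc -c nvc0n1p0 --l2p_dram_limit 20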
00:16:15.505 [2024-11-18 06:50:08.402089] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:16:19.715 [2024-11-18 06:50:12.331331] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:19.715 [2024-11-18 06:50:12.331605] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:16:19.715 [2024-11-18 06:50:12.331632] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3929.208 ms 00:16:19.715 [2024-11-18 06:50:12.331648] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:19.715 [2024-11-18 06:50:12.345355] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:19.715 [2024-11-18 06:50:12.345421] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:16:19.715 [2024-11-18 06:50:12.345436] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.589 ms 00:16:19.715 [2024-11-18 06:50:12.345450] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:19.715 [2024-11-18 06:50:12.345577] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:19.715 [2024-11-18 06:50:12.345590] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:16:19.715 [2024-11-18 06:50:12.345600] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.075 ms 00:16:19.715 [2024-11-18 06:50:12.345613] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:19.715 [2024-11-18 06:50:12.371827] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:19.715 [2024-11-18 06:50:12.372291] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:16:19.715 [2024-11-18 06:50:12.372353] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.160 ms 00:16:19.715 [2024-11-18 06:50:12.372382] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:19.715 [2024-11-18 06:50:12.372474] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:19.715 [2024-11-18 06:50:12.372505] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:16:19.715 [2024-11-18 06:50:12.372534] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:16:19.715 [2024-11-18 06:50:12.372559] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:19.715 [2024-11-18 06:50:12.373355] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:19.715 [2024-11-18 06:50:12.373441] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:16:19.715 [2024-11-18 06:50:12.373467] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.664 ms 00:16:19.715 [2024-11-18 06:50:12.373497] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:19.715 [2024-11-18 06:50:12.373788] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:19.715 [2024-11-18 06:50:12.373817] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:16:19.715 [2024-11-18 06:50:12.373839] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.240 ms 00:16:19.715 [2024-11-18 06:50:12.373870] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:19.715 [2024-11-18 06:50:12.382191] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:19.715 [2024-11-18 06:50:12.382244] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:16:19.715 [2024-11-18 
06:50:12.382256] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.282 ms 00:16:19.715 [2024-11-18 06:50:12.382267] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:19.715 [2024-11-18 06:50:12.392349] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 19 (of 20) MiB 00:16:19.715 [2024-11-18 06:50:12.400141] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:19.715 [2024-11-18 06:50:12.400184] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:16:19.715 [2024-11-18 06:50:12.400199] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.788 ms 00:16:19.715 [2024-11-18 06:50:12.400206] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:19.715 [2024-11-18 06:50:12.494630] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:19.715 [2024-11-18 06:50:12.494689] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:16:19.715 [2024-11-18 06:50:12.494709] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 94.388 ms 00:16:19.715 [2024-11-18 06:50:12.494718] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:19.715 [2024-11-18 06:50:12.494925] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:19.715 [2024-11-18 06:50:12.494936] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:16:19.715 [2024-11-18 06:50:12.494947] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.153 ms 00:16:19.715 [2024-11-18 06:50:12.494956] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:19.715 [2024-11-18 06:50:12.501228] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:19.715 [2024-11-18 06:50:12.501287] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:16:19.715 [2024-11-18 06:50:12.501301] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.248 ms 00:16:19.715 [2024-11-18 06:50:12.501310] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:19.715 [2024-11-18 06:50:12.506376] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:19.715 [2024-11-18 06:50:12.506568] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:16:19.715 [2024-11-18 06:50:12.506593] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.010 ms 00:16:19.715 [2024-11-18 06:50:12.506601] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:19.715 [2024-11-18 06:50:12.506933] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:19.716 [2024-11-18 06:50:12.506944] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:16:19.716 [2024-11-18 06:50:12.506958] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.290 ms 00:16:19.716 [2024-11-18 06:50:12.506967] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:19.716 [2024-11-18 06:50:12.553631] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:19.716 [2024-11-18 06:50:12.553820] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:16:19.716 [2024-11-18 06:50:12.553846] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 46.612 ms 00:16:19.716 [2024-11-18 06:50:12.553855] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:19.716 [2024-11-18 06:50:12.561228] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:19.716 [2024-11-18 06:50:12.561284] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:16:19.716 [2024-11-18 06:50:12.561298] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.277 ms 00:16:19.716 [2024-11-18 06:50:12.561307] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:19.716 [2024-11-18 06:50:12.567040] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:19.716 [2024-11-18 06:50:12.567088] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:16:19.716 [2024-11-18 06:50:12.567101] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.682 ms 00:16:19.716 [2024-11-18 06:50:12.567108] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:19.716 [2024-11-18 06:50:12.573324] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:19.716 [2024-11-18 06:50:12.573373] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:16:19.716 [2024-11-18 06:50:12.573389] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.164 ms 00:16:19.716 [2024-11-18 06:50:12.573397] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:19.716 [2024-11-18 06:50:12.573451] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:19.716 [2024-11-18 06:50:12.573461] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:16:19.716 [2024-11-18 06:50:12.573483] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:16:19.716 [2024-11-18 06:50:12.573491] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:19.716 [2024-11-18 06:50:12.573566] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:19.716 [2024-11-18 06:50:12.573576] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:16:19.716 [2024-11-18 06:50:12.573586] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.041 ms 00:16:19.716 [2024-11-18 06:50:12.573594] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:19.716 [2024-11-18 06:50:12.574729] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 4180.593 ms, result 0 00:16:19.716 { 00:16:19.716 "name": "ftl0", 00:16:19.716 "uuid": "651bbe35-ce39-425a-86a8-ab3e9565c798" 00:16:19.716 } 00:16:19.716 06:50:12 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_stats -b ftl0 00:16:19.716 06:50:12 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # jq -r .name 00:16:19.716 06:50:12 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # grep -qw ftl0 00:16:19.976 06:50:12 ftl.ftl_bdevperf -- ftl/bdevperf.sh@30 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 1 -w randwrite -t 4 -o 69632 00:16:19.976 [2024-11-18 06:50:12.912907] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:16:19.976 I/O size of 69632 is greater than zero copy threshold (65536). 00:16:19.976 Zero copy mechanism will not be used. 00:16:19.976 Running I/O for 4 seconds... 
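The zero-copy warning above is straight arithmetic on the -o argument: this first run's 69632-byte I/O size exceeds bdevperf's 65536-byte zero-copy threshold, so bounce buffers are used instead. A quick check, again assuming the 4096-byte block size reported for nvme0n1 further down:

  echo $(( 69632 / 4096 ))    # 17 blocks per I/O
  echo $(( 69632 > 65536 ))   # 1 -> above the 64 KiB zero-copy threshold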
00:16:21.863 879.00 IOPS, 58.37 MiB/s [2024-11-18T06:50:16.338Z] 877.00 IOPS, 58.24 MiB/s [2024-11-18T06:50:17.281Z] 1334.00 IOPS, 88.59 MiB/s
00:16:24.194 Latency(us)
00:16:24.194 [2024-11-18T06:50:17.281Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:16:24.194 Job: ftl0 (Core Mask 0x1, workload: randwrite, depth: 1, IO size: 69632)
00:16:24.195 ftl0 : 4.00 1363.21 90.53 0.00 0.00 772.95 125.24 3428.04
00:16:24.195 [2024-11-18T06:50:17.282Z] ===================================================================================================================
00:16:24.195 [2024-11-18T06:50:17.282Z] Total : 1363.21 90.53 0.00 0.00 772.95 125.24 3428.04
00:16:24.195 [2024-11-18 06:50:16.920289] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0
00:16:24.195 {
00:16:24.195 "results": [
00:16:24.195 {
00:16:24.195 "job": "ftl0",
00:16:24.195 "core_mask": "0x1",
00:16:24.195 "workload": "randwrite",
00:16:24.195 "status": "finished",
00:16:24.195 "queue_depth": 1,
00:16:24.195 "io_size": 69632,
00:16:24.195 "runtime": 4.000124,
00:16:24.195 "iops": 1363.2077405600426,
00:16:24.195 "mibps": 90.52551402156533,
00:16:24.195 "io_failed": 0,
00:16:24.195 "io_timeout": 0,
00:16:24.195 "avg_latency_us": 772.9454307438389,
00:16:24.195 "min_latency_us": 125.24307692307693,
00:16:24.195 "max_latency_us": 3428.036923076923
00:16:24.195 }
00:16:24.195 ],
00:16:24.195 "core_count": 1
00:16:24.195 }
00:16:24.195 06:50:16 ftl.ftl_bdevperf -- ftl/bdevperf.sh@31 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 128 -w randwrite -t 4 -o 4096
[2024-11-18 06:50:17.039579] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0
Running I/O for 4 seconds...
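The "mibps" figure in the qd=1 results JSON above is consistent with iops * io_size / 2^20; a one-line sanity check reproduces the reported value:

  awk 'BEGIN { print 1363.2077405600426 * 69632 / 1048576 }'   # -> 90.5255..., matching "mibps" above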
00:16:26.084 5942.00 IOPS, 23.21 MiB/s [2024-11-18T06:50:20.115Z] 6951.00 IOPS, 27.15 MiB/s [2024-11-18T06:50:21.060Z] 6329.67 IOPS, 24.73 MiB/s [2024-11-18T06:50:21.322Z] 6048.50 IOPS, 23.63 MiB/s
00:16:28.235 Latency(us)
00:16:28.235 [2024-11-18T06:50:21.322Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:16:28.235 Job: ftl0 (Core Mask 0x1, workload: randwrite, depth: 128, IO size: 4096)
00:16:28.235 ftl0 : 4.03 6027.53 23.55 0.00 0.00 21144.38 244.18 48194.17
00:16:28.235 [2024-11-18T06:50:21.322Z] ===================================================================================================================
00:16:28.235 [2024-11-18T06:50:21.322Z] Total : 6027.53 23.55 0.00 0.00 21144.38 0.00 48194.17
00:16:28.235 [2024-11-18 06:50:21.081315] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0
00:16:28.235 {
00:16:28.235 "results": [
00:16:28.235 {
00:16:28.235 "job": "ftl0",
00:16:28.235 "core_mask": "0x1",
00:16:28.235 "workload": "randwrite",
00:16:28.235 "status": "finished",
00:16:28.235 "queue_depth": 128,
00:16:28.235 "io_size": 4096,
00:16:28.235 "runtime": 4.033494,
00:16:28.235 "iops": 6027.528490187416,
00:16:28.235 "mibps": 23.545033164794592,
00:16:28.235 "io_failed": 0,
00:16:28.235 "io_timeout": 0,
00:16:28.235 "avg_latency_us": 21144.376407218973,
00:16:28.235 "min_latency_us": 244.1846153846154,
00:16:28.235 "max_latency_us": 48194.166153846156
00:16:28.235 }
00:16:28.235 ],
00:16:28.235 "core_count": 1
00:16:28.235 }
00:16:28.235 06:50:21 ftl.ftl_bdevperf -- ftl/bdevperf.sh@32 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 128 -w verify -t 4 -o 4096
[2024-11-18 06:50:21.194858] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0
Running I/O for 4 seconds...
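The per-run results JSON blocks above lend themselves to jq, the same tool the test script already uses for its earlier `jq -r .name` check. A minimal sketch, assuming the JSON were captured to a file (results.json is a hypothetical name here, not something this script writes):

  jq -r '.results[0] | "\(.iops) IOPS, \(.mibps) MiB/s, avg \(.avg_latency_us) us"' results.json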
00:16:30.121 4366.00 IOPS, 17.05 MiB/s [2024-11-18T06:50:24.599Z] 4463.00 IOPS, 17.43 MiB/s [2024-11-18T06:50:25.543Z] 4470.33 IOPS, 17.46 MiB/s [2024-11-18T06:50:25.543Z] 4465.00 IOPS, 17.44 MiB/s 00:16:32.456 Latency(us) 00:16:32.456 [2024-11-18T06:50:25.543Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:32.456 Job: ftl0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:16:32.456 Verification LBA range: start 0x0 length 0x1400000 00:16:32.456 ftl0 : 4.02 4478.66 17.49 0.00 0.00 28495.90 412.75 37506.76 00:16:32.456 [2024-11-18T06:50:25.543Z] =================================================================================================================== 00:16:32.456 [2024-11-18T06:50:25.543Z] Total : 4478.66 17.49 0.00 0.00 28495.90 0.00 37506.76 00:16:32.456 [2024-11-18 06:50:25.221183] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:16:32.456 { 00:16:32.456 "results": [ 00:16:32.456 { 00:16:32.456 "job": "ftl0", 00:16:32.456 "core_mask": "0x1", 00:16:32.456 "workload": "verify", 00:16:32.456 "status": "finished", 00:16:32.456 "verify_range": { 00:16:32.456 "start": 0, 00:16:32.456 "length": 20971520 00:16:32.456 }, 00:16:32.456 "queue_depth": 128, 00:16:32.456 "io_size": 4096, 00:16:32.456 "runtime": 4.016378, 00:16:32.456 "iops": 4478.662117958021, 00:16:32.456 "mibps": 17.494773898273518, 00:16:32.457 "io_failed": 0, 00:16:32.457 "io_timeout": 0, 00:16:32.457 "avg_latency_us": 28495.904351961137, 00:16:32.457 "min_latency_us": 412.7507692307692, 00:16:32.457 "max_latency_us": 37506.75692307692 00:16:32.457 } 00:16:32.457 ], 00:16:32.457 "core_count": 1 00:16:32.457 } 00:16:32.457 06:50:25 ftl.ftl_bdevperf -- ftl/bdevperf.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_delete -b ftl0 00:16:32.457 [2024-11-18 06:50:25.437337] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:32.457 [2024-11-18 06:50:25.437550] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:16:32.457 [2024-11-18 06:50:25.437755] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:16:32.457 [2024-11-18 06:50:25.437785] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:32.457 [2024-11-18 06:50:25.437838] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:16:32.457 [2024-11-18 06:50:25.438597] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:32.457 [2024-11-18 06:50:25.438770] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:16:32.457 [2024-11-18 06:50:25.438875] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.718 ms 00:16:32.457 [2024-11-18 06:50:25.438906] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:32.457 [2024-11-18 06:50:25.441732] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:32.457 [2024-11-18 06:50:25.441883] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:16:32.457 [2024-11-18 06:50:25.441950] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.780 ms 00:16:32.457 [2024-11-18 06:50:25.441970] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:32.719 [2024-11-18 06:50:25.662454] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:32.719 [2024-11-18 06:50:25.662517] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist 
L2P 00:16:32.719 [2024-11-18 06:50:25.662532] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 220.429 ms 00:16:32.719 [2024-11-18 06:50:25.662547] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:32.719 [2024-11-18 06:50:25.668851] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:32.719 [2024-11-18 06:50:25.669037] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:16:32.719 [2024-11-18 06:50:25.669057] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.262 ms 00:16:32.719 [2024-11-18 06:50:25.669068] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:32.719 [2024-11-18 06:50:25.671952] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:32.719 [2024-11-18 06:50:25.672027] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:16:32.719 [2024-11-18 06:50:25.672037] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.811 ms 00:16:32.719 [2024-11-18 06:50:25.672048] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:32.719 [2024-11-18 06:50:25.677958] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:32.719 [2024-11-18 06:50:25.678031] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:16:32.719 [2024-11-18 06:50:25.678042] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.866 ms 00:16:32.719 [2024-11-18 06:50:25.678056] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:32.719 [2024-11-18 06:50:25.678179] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:32.719 [2024-11-18 06:50:25.678195] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:16:32.719 [2024-11-18 06:50:25.678205] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.078 ms 00:16:32.719 [2024-11-18 06:50:25.678215] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:32.719 [2024-11-18 06:50:25.681430] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:32.719 [2024-11-18 06:50:25.681484] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:16:32.719 [2024-11-18 06:50:25.681493] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.198 ms 00:16:32.719 [2024-11-18 06:50:25.681503] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:32.719 [2024-11-18 06:50:25.684046] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:32.719 [2024-11-18 06:50:25.684096] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:16:32.719 [2024-11-18 06:50:25.684107] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.502 ms 00:16:32.719 [2024-11-18 06:50:25.684116] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:32.719 [2024-11-18 06:50:25.686505] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:32.719 [2024-11-18 06:50:25.686559] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:16:32.719 [2024-11-18 06:50:25.686569] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.348 ms 00:16:32.719 [2024-11-18 06:50:25.686583] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:32.719 [2024-11-18 06:50:25.688809] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:32.719 [2024-11-18 06:50:25.688860] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:16:32.719 [2024-11-18 06:50:25.688870] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.161 ms 00:16:32.719 [2024-11-18 06:50:25.688879] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:32.719 [2024-11-18 06:50:25.688918] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:16:32.719 [2024-11-18 06:50:25.688936] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:16:32.719 [2024-11-18 06:50:25.688946] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:16:32.719 [2024-11-18 06:50:25.688957] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:16:32.719 [2024-11-18 06:50:25.688965] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:16:32.719 [2024-11-18 06:50:25.688993] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:16:32.719 [2024-11-18 06:50:25.689002] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:16:32.719 [2024-11-18 06:50:25.689015] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:16:32.719 [2024-11-18 06:50:25.689023] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:16:32.719 [2024-11-18 06:50:25.689034] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:16:32.719 [2024-11-18 06:50:25.689042] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:16:32.719 [2024-11-18 06:50:25.689055] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:16:32.719 [2024-11-18 06:50:25.689062] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:16:32.719 [2024-11-18 06:50:25.689072] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:16:32.719 [2024-11-18 06:50:25.689079] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:16:32.719 [2024-11-18 06:50:25.689089] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:16:32.719 [2024-11-18 06:50:25.689097] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:16:32.719 [2024-11-18 06:50:25.689106] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:16:32.719 [2024-11-18 06:50:25.689114] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:16:32.719 [2024-11-18 06:50:25.689123] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:16:32.719 [2024-11-18 06:50:25.689131] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:16:32.719 [2024-11-18 06:50:25.689140] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:16:32.720 [2024-11-18 06:50:25.689148] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: 
free 00:16:32.720 [2024-11-18 06:50:25.689157] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:16:32.720 [2024-11-18 06:50:25.689165] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:16:32.720 [2024-11-18 06:50:25.689175] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:16:32.720 [2024-11-18 06:50:25.689182] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:16:32.720 [2024-11-18 06:50:25.689195] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:16:32.720 [2024-11-18 06:50:25.689203] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:16:32.720 [2024-11-18 06:50:25.689212] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:16:32.720 [2024-11-18 06:50:25.689220] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:16:32.720 [2024-11-18 06:50:25.689231] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:16:32.720 [2024-11-18 06:50:25.689240] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:16:32.720 [2024-11-18 06:50:25.689252] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:16:32.720 [2024-11-18 06:50:25.689259] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:16:32.720 [2024-11-18 06:50:25.689269] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:16:32.720 [2024-11-18 06:50:25.689277] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:16:32.720 [2024-11-18 06:50:25.689286] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:16:32.720 [2024-11-18 06:50:25.689294] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:16:32.720 [2024-11-18 06:50:25.689303] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:16:32.720 [2024-11-18 06:50:25.689310] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:16:32.720 [2024-11-18 06:50:25.689320] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:16:32.720 [2024-11-18 06:50:25.689328] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:16:32.720 [2024-11-18 06:50:25.689340] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:16:32.720 [2024-11-18 06:50:25.689355] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:16:32.720 [2024-11-18 06:50:25.689365] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:16:32.720 [2024-11-18 06:50:25.689372] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:16:32.720 [2024-11-18 06:50:25.689382] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 
261120 wr_cnt: 0 state: free 00:16:32.720 [2024-11-18 06:50:25.689390] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:16:32.720 [2024-11-18 06:50:25.689399] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:16:32.720 [2024-11-18 06:50:25.689406] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:16:32.720 [2024-11-18 06:50:25.689416] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:16:32.720 [2024-11-18 06:50:25.689423] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:16:32.720 [2024-11-18 06:50:25.689436] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:16:32.720 [2024-11-18 06:50:25.689444] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:16:32.720 [2024-11-18 06:50:25.689454] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:16:32.720 [2024-11-18 06:50:25.689462] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:16:32.720 [2024-11-18 06:50:25.689472] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:16:32.720 [2024-11-18 06:50:25.689479] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:16:32.720 [2024-11-18 06:50:25.689492] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:16:32.720 [2024-11-18 06:50:25.689501] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:16:32.720 [2024-11-18 06:50:25.689510] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:16:32.720 [2024-11-18 06:50:25.689518] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:16:32.720 [2024-11-18 06:50:25.689527] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:16:32.720 [2024-11-18 06:50:25.689536] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:16:32.720 [2024-11-18 06:50:25.689546] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:16:32.720 [2024-11-18 06:50:25.689554] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:16:32.720 [2024-11-18 06:50:25.689564] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:16:32.720 [2024-11-18 06:50:25.689571] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:16:32.720 [2024-11-18 06:50:25.689600] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:16:32.720 [2024-11-18 06:50:25.689608] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:16:32.720 [2024-11-18 06:50:25.689618] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:16:32.720 [2024-11-18 06:50:25.689625] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:16:32.720 [2024-11-18 06:50:25.689634] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:16:32.720 [2024-11-18 06:50:25.689642] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:16:32.720 [2024-11-18 06:50:25.689653] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:16:32.720 [2024-11-18 06:50:25.689661] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:16:32.720 [2024-11-18 06:50:25.689671] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:16:32.720 [2024-11-18 06:50:25.689678] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:16:32.720 [2024-11-18 06:50:25.689688] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:16:32.720 [2024-11-18 06:50:25.689695] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:16:32.720 [2024-11-18 06:50:25.689704] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:16:32.720 [2024-11-18 06:50:25.689712] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:16:32.720 [2024-11-18 06:50:25.689722] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:16:32.720 [2024-11-18 06:50:25.689729] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:16:32.720 [2024-11-18 06:50:25.689742] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:16:32.720 [2024-11-18 06:50:25.689750] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:16:32.720 [2024-11-18 06:50:25.689760] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:16:32.720 [2024-11-18 06:50:25.689767] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:16:32.720 [2024-11-18 06:50:25.689777] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:16:32.720 [2024-11-18 06:50:25.689784] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:16:32.720 [2024-11-18 06:50:25.689795] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:16:32.720 [2024-11-18 06:50:25.689803] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:16:32.720 [2024-11-18 06:50:25.689812] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:16:32.720 [2024-11-18 06:50:25.689820] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:16:32.720 [2024-11-18 06:50:25.689829] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:16:32.720 [2024-11-18 06:50:25.689836] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:16:32.720 [2024-11-18 06:50:25.689846] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:16:32.720 [2024-11-18 06:50:25.689854] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:16:32.720 [2024-11-18 06:50:25.689863] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:16:32.720 [2024-11-18 06:50:25.689871] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:16:32.720 [2024-11-18 06:50:25.689889] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:16:32.720 [2024-11-18 06:50:25.689900] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 651bbe35-ce39-425a-86a8-ab3e9565c798 00:16:32.720 [2024-11-18 06:50:25.689910] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:16:32.720 [2024-11-18 06:50:25.689917] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:16:32.720 [2024-11-18 06:50:25.689926] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:16:32.720 [2024-11-18 06:50:25.689933] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:16:32.720 [2024-11-18 06:50:25.689944] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:16:32.720 [2024-11-18 06:50:25.689953] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:16:32.720 [2024-11-18 06:50:25.689964] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:16:32.720 [2024-11-18 06:50:25.689970] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:16:32.720 [2024-11-18 06:50:25.689991] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:16:32.720 [2024-11-18 06:50:25.689998] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:32.720 [2024-11-18 06:50:25.690007] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:16:32.721 [2024-11-18 06:50:25.690024] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.082 ms 00:16:32.721 [2024-11-18 06:50:25.690033] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:32.721 [2024-11-18 06:50:25.692338] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:32.721 [2024-11-18 06:50:25.692375] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:16:32.721 [2024-11-18 06:50:25.692390] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.283 ms 00:16:32.721 [2024-11-18 06:50:25.692400] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:32.721 [2024-11-18 06:50:25.692538] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:32.721 [2024-11-18 06:50:25.692555] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:16:32.721 [2024-11-18 06:50:25.692566] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.089 ms 00:16:32.721 [2024-11-18 06:50:25.692578] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:32.721 [2024-11-18 06:50:25.700219] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:32.721 [2024-11-18 06:50:25.700419] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:16:32.721 [2024-11-18 06:50:25.700437] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:32.721 [2024-11-18 06:50:25.700448] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:16:32.721 [2024-11-18 06:50:25.700515] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:32.721 [2024-11-18 06:50:25.700526] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:16:32.721 [2024-11-18 06:50:25.700537] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:32.721 [2024-11-18 06:50:25.700547] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:32.721 [2024-11-18 06:50:25.700632] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:32.721 [2024-11-18 06:50:25.700646] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:16:32.721 [2024-11-18 06:50:25.700655] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:32.721 [2024-11-18 06:50:25.700665] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:32.721 [2024-11-18 06:50:25.700679] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:32.721 [2024-11-18 06:50:25.700690] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:16:32.721 [2024-11-18 06:50:25.700698] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:32.721 [2024-11-18 06:50:25.700713] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:32.721 [2024-11-18 06:50:25.713561] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:32.721 [2024-11-18 06:50:25.713614] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:16:32.721 [2024-11-18 06:50:25.713625] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:32.721 [2024-11-18 06:50:25.713635] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:32.721 [2024-11-18 06:50:25.724171] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:32.721 [2024-11-18 06:50:25.724223] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:16:32.721 [2024-11-18 06:50:25.724234] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:32.721 [2024-11-18 06:50:25.724248] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:32.721 [2024-11-18 06:50:25.724321] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:32.721 [2024-11-18 06:50:25.724334] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:16:32.721 [2024-11-18 06:50:25.724342] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:32.721 [2024-11-18 06:50:25.724352] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:32.721 [2024-11-18 06:50:25.724395] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:32.721 [2024-11-18 06:50:25.724406] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:16:32.721 [2024-11-18 06:50:25.724414] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:32.721 [2024-11-18 06:50:25.724427] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:32.721 [2024-11-18 06:50:25.724505] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:32.721 [2024-11-18 06:50:25.724517] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:16:32.721 [2024-11-18 06:50:25.724525] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 
ms 00:16:32.721 [2024-11-18 06:50:25.724535] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:32.721 [2024-11-18 06:50:25.724563] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:32.721 [2024-11-18 06:50:25.724575] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:16:32.721 [2024-11-18 06:50:25.724584] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:32.721 [2024-11-18 06:50:25.724593] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:32.721 [2024-11-18 06:50:25.724637] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:32.721 [2024-11-18 06:50:25.724648] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:16:32.721 [2024-11-18 06:50:25.724656] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:32.721 [2024-11-18 06:50:25.724666] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:32.721 [2024-11-18 06:50:25.724708] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:32.721 [2024-11-18 06:50:25.724720] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:16:32.721 [2024-11-18 06:50:25.724729] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:32.721 [2024-11-18 06:50:25.724741] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:32.721 [2024-11-18 06:50:25.724880] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 287.503 ms, result 0 00:16:32.721 true 00:16:32.721 06:50:25 ftl.ftl_bdevperf -- ftl/bdevperf.sh@36 -- # killprocess 84635 00:16:32.721 06:50:25 ftl.ftl_bdevperf -- common/autotest_common.sh@954 -- # '[' -z 84635 ']' 00:16:32.721 06:50:25 ftl.ftl_bdevperf -- common/autotest_common.sh@958 -- # kill -0 84635 00:16:32.721 06:50:25 ftl.ftl_bdevperf -- common/autotest_common.sh@959 -- # uname 00:16:32.721 06:50:25 ftl.ftl_bdevperf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:32.721 06:50:25 ftl.ftl_bdevperf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 84635 00:16:32.721 killing process with pid 84635 00:16:32.721 Received shutdown signal, test time was about 4.000000 seconds 00:16:32.721 00:16:32.721 Latency(us) 00:16:32.721 [2024-11-18T06:50:25.808Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:32.721 [2024-11-18T06:50:25.808Z] =================================================================================================================== 00:16:32.721 [2024-11-18T06:50:25.808Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:32.721 06:50:25 ftl.ftl_bdevperf -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:32.721 06:50:25 ftl.ftl_bdevperf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:32.721 06:50:25 ftl.ftl_bdevperf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 84635' 00:16:32.721 06:50:25 ftl.ftl_bdevperf -- common/autotest_common.sh@973 -- # kill 84635 00:16:32.721 06:50:25 ftl.ftl_bdevperf -- common/autotest_common.sh@978 -- # wait 84635 00:16:38.071 Remove shared memory files 00:16:38.071 06:50:30 ftl.ftl_bdevperf -- ftl/bdevperf.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:16:38.071 06:50:30 ftl.ftl_bdevperf -- ftl/bdevperf.sh@39 -- # remove_shm 00:16:38.071 06:50:30 ftl.ftl_bdevperf -- ftl/common.sh@204 -- # echo Remove shared memory files 00:16:38.071 06:50:30 
ftl.ftl_bdevperf -- ftl/common.sh@205 -- # rm -f rm -f 00:16:38.071 06:50:30 ftl.ftl_bdevperf -- ftl/common.sh@206 -- # rm -f rm -f 00:16:38.071 06:50:30 ftl.ftl_bdevperf -- ftl/common.sh@207 -- # rm -f rm -f 00:16:38.071 06:50:30 ftl.ftl_bdevperf -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:16:38.071 06:50:30 ftl.ftl_bdevperf -- ftl/common.sh@209 -- # rm -f rm -f 00:16:38.071 ************************************ 00:16:38.071 END TEST ftl_bdevperf 00:16:38.071 ************************************ 00:16:38.071 00:16:38.071 real 0m25.840s 00:16:38.071 user 0m28.460s 00:16:38.071 sys 0m1.011s 00:16:38.071 06:50:30 ftl.ftl_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:38.071 06:50:30 ftl.ftl_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:16:38.071 06:50:30 ftl -- ftl/ftl.sh@75 -- # run_test ftl_trim /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 0000:00:11.0 0000:00:10.0 00:16:38.071 06:50:30 ftl -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:16:38.071 06:50:30 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:38.071 06:50:30 ftl -- common/autotest_common.sh@10 -- # set +x 00:16:38.071 ************************************ 00:16:38.071 START TEST ftl_trim 00:16:38.071 ************************************ 00:16:38.071 06:50:30 ftl.ftl_trim -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 0000:00:11.0 0000:00:10.0 00:16:38.071 * Looking for test storage... 00:16:38.071 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:16:38.071 06:50:30 ftl.ftl_trim -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:16:38.071 06:50:30 ftl.ftl_trim -- common/autotest_common.sh@1693 -- # lcov --version 00:16:38.071 06:50:30 ftl.ftl_trim -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:16:38.071 06:50:30 ftl.ftl_trim -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:16:38.071 06:50:30 ftl.ftl_trim -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:38.071 06:50:30 ftl.ftl_trim -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:38.071 06:50:30 ftl.ftl_trim -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:38.071 06:50:30 ftl.ftl_trim -- scripts/common.sh@336 -- # IFS=.-: 00:16:38.071 06:50:30 ftl.ftl_trim -- scripts/common.sh@336 -- # read -ra ver1 00:16:38.071 06:50:30 ftl.ftl_trim -- scripts/common.sh@337 -- # IFS=.-: 00:16:38.071 06:50:30 ftl.ftl_trim -- scripts/common.sh@337 -- # read -ra ver2 00:16:38.071 06:50:30 ftl.ftl_trim -- scripts/common.sh@338 -- # local 'op=<' 00:16:38.071 06:50:30 ftl.ftl_trim -- scripts/common.sh@340 -- # ver1_l=2 00:16:38.071 06:50:30 ftl.ftl_trim -- scripts/common.sh@341 -- # ver2_l=1 00:16:38.071 06:50:30 ftl.ftl_trim -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:38.071 06:50:30 ftl.ftl_trim -- scripts/common.sh@344 -- # case "$op" in 00:16:38.071 06:50:30 ftl.ftl_trim -- scripts/common.sh@345 -- # : 1 00:16:38.071 06:50:30 ftl.ftl_trim -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:38.071 06:50:30 ftl.ftl_trim -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:38.071 06:50:30 ftl.ftl_trim -- scripts/common.sh@365 -- # decimal 1 00:16:38.071 06:50:30 ftl.ftl_trim -- scripts/common.sh@353 -- # local d=1 00:16:38.071 06:50:30 ftl.ftl_trim -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:38.071 06:50:30 ftl.ftl_trim -- scripts/common.sh@355 -- # echo 1 00:16:38.071 06:50:30 ftl.ftl_trim -- scripts/common.sh@365 -- # ver1[v]=1 00:16:38.071 06:50:30 ftl.ftl_trim -- scripts/common.sh@366 -- # decimal 2 00:16:38.071 06:50:30 ftl.ftl_trim -- scripts/common.sh@353 -- # local d=2 00:16:38.071 06:50:30 ftl.ftl_trim -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:38.071 06:50:30 ftl.ftl_trim -- scripts/common.sh@355 -- # echo 2 00:16:38.071 06:50:30 ftl.ftl_trim -- scripts/common.sh@366 -- # ver2[v]=2 00:16:38.071 06:50:30 ftl.ftl_trim -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:38.071 06:50:30 ftl.ftl_trim -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:38.071 06:50:30 ftl.ftl_trim -- scripts/common.sh@368 -- # return 0 00:16:38.071 06:50:30 ftl.ftl_trim -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:38.071 06:50:30 ftl.ftl_trim -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:16:38.071 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:38.071 --rc genhtml_branch_coverage=1 00:16:38.071 --rc genhtml_function_coverage=1 00:16:38.071 --rc genhtml_legend=1 00:16:38.071 --rc geninfo_all_blocks=1 00:16:38.071 --rc geninfo_unexecuted_blocks=1 00:16:38.071 00:16:38.071 ' 00:16:38.071 06:50:30 ftl.ftl_trim -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:16:38.071 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:38.071 --rc genhtml_branch_coverage=1 00:16:38.071 --rc genhtml_function_coverage=1 00:16:38.071 --rc genhtml_legend=1 00:16:38.071 --rc geninfo_all_blocks=1 00:16:38.071 --rc geninfo_unexecuted_blocks=1 00:16:38.071 00:16:38.071 ' 00:16:38.071 06:50:30 ftl.ftl_trim -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:16:38.071 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:38.071 --rc genhtml_branch_coverage=1 00:16:38.071 --rc genhtml_function_coverage=1 00:16:38.071 --rc genhtml_legend=1 00:16:38.071 --rc geninfo_all_blocks=1 00:16:38.071 --rc geninfo_unexecuted_blocks=1 00:16:38.071 00:16:38.071 ' 00:16:38.071 06:50:30 ftl.ftl_trim -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:16:38.071 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:38.071 --rc genhtml_branch_coverage=1 00:16:38.071 --rc genhtml_function_coverage=1 00:16:38.071 --rc genhtml_legend=1 00:16:38.071 --rc geninfo_all_blocks=1 00:16:38.071 --rc geninfo_unexecuted_blocks=1 00:16:38.071 00:16:38.071 ' 00:16:38.071 06:50:30 ftl.ftl_trim -- ftl/trim.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:16:38.071 06:50:30 ftl.ftl_trim -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 00:16:38.071 06:50:30 ftl.ftl_trim -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:16:38.071 06:50:30 ftl.ftl_trim -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:16:38.071 06:50:30 ftl.ftl_trim -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
00:16:38.071 06:50:30 ftl.ftl_trim -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:16:38.071 06:50:30 ftl.ftl_trim -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:38.071 06:50:30 ftl.ftl_trim -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:16:38.071 06:50:30 ftl.ftl_trim -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:16:38.071 06:50:30 ftl.ftl_trim -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:38.072 06:50:30 ftl.ftl_trim -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:38.072 06:50:30 ftl.ftl_trim -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:16:38.072 06:50:30 ftl.ftl_trim -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:16:38.072 06:50:30 ftl.ftl_trim -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:16:38.072 06:50:30 ftl.ftl_trim -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:16:38.072 06:50:30 ftl.ftl_trim -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:16:38.072 06:50:30 ftl.ftl_trim -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:16:38.072 06:50:30 ftl.ftl_trim -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:38.072 06:50:30 ftl.ftl_trim -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:38.072 06:50:30 ftl.ftl_trim -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:16:38.072 06:50:30 ftl.ftl_trim -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:16:38.072 06:50:30 ftl.ftl_trim -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:16:38.072 06:50:30 ftl.ftl_trim -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:16:38.072 06:50:30 ftl.ftl_trim -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:16:38.072 06:50:30 ftl.ftl_trim -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:16:38.072 06:50:30 ftl.ftl_trim -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:16:38.072 06:50:30 ftl.ftl_trim -- ftl/common.sh@23 -- # spdk_ini_pid= 00:16:38.072 06:50:30 ftl.ftl_trim -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:16:38.072 06:50:30 ftl.ftl_trim -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:16:38.072 06:50:30 ftl.ftl_trim -- ftl/trim.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:38.072 06:50:30 ftl.ftl_trim -- ftl/trim.sh@23 -- # device=0000:00:11.0 00:16:38.072 06:50:30 ftl.ftl_trim -- ftl/trim.sh@24 -- # cache_device=0000:00:10.0 00:16:38.072 06:50:30 ftl.ftl_trim -- ftl/trim.sh@25 -- # timeout=240 00:16:38.072 06:50:30 ftl.ftl_trim -- ftl/trim.sh@26 -- # data_size_in_blocks=65536 00:16:38.072 06:50:30 ftl.ftl_trim -- ftl/trim.sh@27 -- # unmap_size_in_blocks=1024 00:16:38.072 06:50:30 ftl.ftl_trim -- ftl/trim.sh@29 -- # [[ y != y ]] 00:16:38.072 06:50:30 ftl.ftl_trim -- ftl/trim.sh@34 -- # export FTL_BDEV_NAME=ftl0 00:16:38.072 06:50:30 ftl.ftl_trim -- ftl/trim.sh@34 -- # FTL_BDEV_NAME=ftl0 00:16:38.072 06:50:30 ftl.ftl_trim -- ftl/trim.sh@35 -- # export FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:16:38.072 06:50:30 ftl.ftl_trim -- ftl/trim.sh@35 -- # FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:16:38.072 06:50:30 ftl.ftl_trim -- 
ftl/trim.sh@37 -- # trap 'fio_kill; exit 1' SIGINT SIGTERM EXIT 00:16:38.072 06:50:30 ftl.ftl_trim -- ftl/trim.sh@40 -- # svcpid=84988 00:16:38.072 06:50:30 ftl.ftl_trim -- ftl/trim.sh@41 -- # waitforlisten 84988 00:16:38.072 06:50:30 ftl.ftl_trim -- ftl/trim.sh@39 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:16:38.072 06:50:30 ftl.ftl_trim -- common/autotest_common.sh@835 -- # '[' -z 84988 ']' 00:16:38.072 06:50:30 ftl.ftl_trim -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:38.072 06:50:30 ftl.ftl_trim -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:38.072 06:50:30 ftl.ftl_trim -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:38.072 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:38.072 06:50:30 ftl.ftl_trim -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:38.072 06:50:30 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:16:38.072 [2024-11-18 06:50:30.506263] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 00:16:38.072 [2024-11-18 06:50:30.506649] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84988 ] 00:16:38.072 [2024-11-18 06:50:30.672043] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:16:38.072 [2024-11-18 06:50:30.704182] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:38.072 [2024-11-18 06:50:30.704490] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:16:38.072 [2024-11-18 06:50:30.704525] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:38.334 06:50:31 ftl.ftl_trim -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:38.334 06:50:31 ftl.ftl_trim -- common/autotest_common.sh@868 -- # return 0 00:16:38.334 06:50:31 ftl.ftl_trim -- ftl/trim.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:16:38.334 06:50:31 ftl.ftl_trim -- ftl/common.sh@54 -- # local name=nvme0 00:16:38.334 06:50:31 ftl.ftl_trim -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:16:38.334 06:50:31 ftl.ftl_trim -- ftl/common.sh@56 -- # local size=103424 00:16:38.334 06:50:31 ftl.ftl_trim -- ftl/common.sh@59 -- # local base_bdev 00:16:38.334 06:50:31 ftl.ftl_trim -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:16:38.595 06:50:31 ftl.ftl_trim -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:16:38.595 06:50:31 ftl.ftl_trim -- ftl/common.sh@62 -- # local base_size 00:16:38.595 06:50:31 ftl.ftl_trim -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:16:38.595 06:50:31 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:16:38.595 06:50:31 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local bdev_info 00:16:38.595 06:50:31 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # local bs 00:16:38.595 06:50:31 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # local nb 00:16:38.595 06:50:31 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:16:38.857 06:50:31 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:16:38.857 { 00:16:38.857 "name": "nvme0n1", 00:16:38.857 "aliases": [ 
00:16:38.857 "d8a42270-c415-4b7e-84a2-6d861da07de7" 00:16:38.857 ], 00:16:38.857 "product_name": "NVMe disk", 00:16:38.857 "block_size": 4096, 00:16:38.857 "num_blocks": 1310720, 00:16:38.857 "uuid": "d8a42270-c415-4b7e-84a2-6d861da07de7", 00:16:38.857 "numa_id": -1, 00:16:38.857 "assigned_rate_limits": { 00:16:38.857 "rw_ios_per_sec": 0, 00:16:38.857 "rw_mbytes_per_sec": 0, 00:16:38.857 "r_mbytes_per_sec": 0, 00:16:38.857 "w_mbytes_per_sec": 0 00:16:38.857 }, 00:16:38.857 "claimed": true, 00:16:38.857 "claim_type": "read_many_write_one", 00:16:38.857 "zoned": false, 00:16:38.857 "supported_io_types": { 00:16:38.857 "read": true, 00:16:38.857 "write": true, 00:16:38.857 "unmap": true, 00:16:38.857 "flush": true, 00:16:38.857 "reset": true, 00:16:38.857 "nvme_admin": true, 00:16:38.857 "nvme_io": true, 00:16:38.857 "nvme_io_md": false, 00:16:38.857 "write_zeroes": true, 00:16:38.857 "zcopy": false, 00:16:38.857 "get_zone_info": false, 00:16:38.857 "zone_management": false, 00:16:38.857 "zone_append": false, 00:16:38.857 "compare": true, 00:16:38.857 "compare_and_write": false, 00:16:38.857 "abort": true, 00:16:38.857 "seek_hole": false, 00:16:38.857 "seek_data": false, 00:16:38.857 "copy": true, 00:16:38.857 "nvme_iov_md": false 00:16:38.857 }, 00:16:38.857 "driver_specific": { 00:16:38.857 "nvme": [ 00:16:38.857 { 00:16:38.857 "pci_address": "0000:00:11.0", 00:16:38.857 "trid": { 00:16:38.857 "trtype": "PCIe", 00:16:38.857 "traddr": "0000:00:11.0" 00:16:38.857 }, 00:16:38.857 "ctrlr_data": { 00:16:38.857 "cntlid": 0, 00:16:38.857 "vendor_id": "0x1b36", 00:16:38.857 "model_number": "QEMU NVMe Ctrl", 00:16:38.857 "serial_number": "12341", 00:16:38.857 "firmware_revision": "8.0.0", 00:16:38.857 "subnqn": "nqn.2019-08.org.qemu:12341", 00:16:38.857 "oacs": { 00:16:38.857 "security": 0, 00:16:38.857 "format": 1, 00:16:38.857 "firmware": 0, 00:16:38.857 "ns_manage": 1 00:16:38.857 }, 00:16:38.857 "multi_ctrlr": false, 00:16:38.857 "ana_reporting": false 00:16:38.857 }, 00:16:38.857 "vs": { 00:16:38.857 "nvme_version": "1.4" 00:16:38.857 }, 00:16:38.857 "ns_data": { 00:16:38.857 "id": 1, 00:16:38.857 "can_share": false 00:16:38.857 } 00:16:38.857 } 00:16:38.857 ], 00:16:38.857 "mp_policy": "active_passive" 00:16:38.857 } 00:16:38.857 } 00:16:38.857 ]' 00:16:38.857 06:50:31 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:16:38.857 06:50:31 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bs=4096 00:16:38.857 06:50:31 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:16:39.118 06:50:31 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # nb=1310720 00:16:39.118 06:50:31 ftl.ftl_trim -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:16:39.118 06:50:31 ftl.ftl_trim -- common/autotest_common.sh@1392 -- # echo 5120 00:16:39.118 06:50:31 ftl.ftl_trim -- ftl/common.sh@63 -- # base_size=5120 00:16:39.118 06:50:31 ftl.ftl_trim -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:16:39.118 06:50:31 ftl.ftl_trim -- ftl/common.sh@67 -- # clear_lvols 00:16:39.118 06:50:31 ftl.ftl_trim -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:16:39.118 06:50:31 ftl.ftl_trim -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:16:39.379 06:50:32 ftl.ftl_trim -- ftl/common.sh@28 -- # stores=33e5fa32-d8d5-4074-a632-d7f198c55eff 00:16:39.379 06:50:32 ftl.ftl_trim -- ftl/common.sh@29 -- # for lvs in $stores 00:16:39.379 06:50:32 ftl.ftl_trim -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
bdev_lvol_delete_lvstore -u 33e5fa32-d8d5-4074-a632-d7f198c55eff 00:16:39.641 06:50:32 ftl.ftl_trim -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:16:39.641 06:50:32 ftl.ftl_trim -- ftl/common.sh@68 -- # lvs=f1992292-5c42-448e-b968-a4ecf9ae834a 00:16:39.641 06:50:32 ftl.ftl_trim -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u f1992292-5c42-448e-b968-a4ecf9ae834a 00:16:39.902 06:50:32 ftl.ftl_trim -- ftl/trim.sh@43 -- # split_bdev=f45ad49b-62d2-4991-871a-3fd51ecec48c 00:16:39.902 06:50:32 ftl.ftl_trim -- ftl/trim.sh@44 -- # create_nv_cache_bdev nvc0 0000:00:10.0 f45ad49b-62d2-4991-871a-3fd51ecec48c 00:16:39.902 06:50:32 ftl.ftl_trim -- ftl/common.sh@35 -- # local name=nvc0 00:16:39.902 06:50:32 ftl.ftl_trim -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:16:39.902 06:50:32 ftl.ftl_trim -- ftl/common.sh@37 -- # local base_bdev=f45ad49b-62d2-4991-871a-3fd51ecec48c 00:16:39.902 06:50:32 ftl.ftl_trim -- ftl/common.sh@38 -- # local cache_size= 00:16:39.902 06:50:32 ftl.ftl_trim -- ftl/common.sh@41 -- # get_bdev_size f45ad49b-62d2-4991-871a-3fd51ecec48c 00:16:39.902 06:50:32 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bdev_name=f45ad49b-62d2-4991-871a-3fd51ecec48c 00:16:39.902 06:50:32 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local bdev_info 00:16:39.902 06:50:32 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # local bs 00:16:39.902 06:50:32 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # local nb 00:16:39.902 06:50:32 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b f45ad49b-62d2-4991-871a-3fd51ecec48c 00:16:40.164 06:50:33 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:16:40.164 { 00:16:40.164 "name": "f45ad49b-62d2-4991-871a-3fd51ecec48c", 00:16:40.164 "aliases": [ 00:16:40.164 "lvs/nvme0n1p0" 00:16:40.164 ], 00:16:40.164 "product_name": "Logical Volume", 00:16:40.164 "block_size": 4096, 00:16:40.164 "num_blocks": 26476544, 00:16:40.164 "uuid": "f45ad49b-62d2-4991-871a-3fd51ecec48c", 00:16:40.164 "assigned_rate_limits": { 00:16:40.164 "rw_ios_per_sec": 0, 00:16:40.164 "rw_mbytes_per_sec": 0, 00:16:40.164 "r_mbytes_per_sec": 0, 00:16:40.164 "w_mbytes_per_sec": 0 00:16:40.164 }, 00:16:40.164 "claimed": false, 00:16:40.164 "zoned": false, 00:16:40.164 "supported_io_types": { 00:16:40.164 "read": true, 00:16:40.164 "write": true, 00:16:40.164 "unmap": true, 00:16:40.164 "flush": false, 00:16:40.164 "reset": true, 00:16:40.164 "nvme_admin": false, 00:16:40.164 "nvme_io": false, 00:16:40.164 "nvme_io_md": false, 00:16:40.164 "write_zeroes": true, 00:16:40.164 "zcopy": false, 00:16:40.164 "get_zone_info": false, 00:16:40.164 "zone_management": false, 00:16:40.164 "zone_append": false, 00:16:40.164 "compare": false, 00:16:40.164 "compare_and_write": false, 00:16:40.164 "abort": false, 00:16:40.164 "seek_hole": true, 00:16:40.164 "seek_data": true, 00:16:40.164 "copy": false, 00:16:40.164 "nvme_iov_md": false 00:16:40.164 }, 00:16:40.164 "driver_specific": { 00:16:40.164 "lvol": { 00:16:40.164 "lvol_store_uuid": "f1992292-5c42-448e-b968-a4ecf9ae834a", 00:16:40.164 "base_bdev": "nvme0n1", 00:16:40.164 "thin_provision": true, 00:16:40.164 "num_allocated_clusters": 0, 00:16:40.164 "snapshot": false, 00:16:40.164 "clone": false, 00:16:40.164 "esnap_clone": false 00:16:40.164 } 00:16:40.164 } 00:16:40.164 } 00:16:40.164 ]' 00:16:40.164 06:50:33 ftl.ftl_trim -- 
common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:16:40.164 06:50:33 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bs=4096 00:16:40.164 06:50:33 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:16:40.164 06:50:33 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # nb=26476544 00:16:40.164 06:50:33 ftl.ftl_trim -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:16:40.164 06:50:33 ftl.ftl_trim -- common/autotest_common.sh@1392 -- # echo 103424 00:16:40.164 06:50:33 ftl.ftl_trim -- ftl/common.sh@41 -- # local base_size=5171 00:16:40.164 06:50:33 ftl.ftl_trim -- ftl/common.sh@44 -- # local nvc_bdev 00:16:40.164 06:50:33 ftl.ftl_trim -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:16:40.426 06:50:33 ftl.ftl_trim -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:16:40.426 06:50:33 ftl.ftl_trim -- ftl/common.sh@47 -- # [[ -z '' ]] 00:16:40.426 06:50:33 ftl.ftl_trim -- ftl/common.sh@48 -- # get_bdev_size f45ad49b-62d2-4991-871a-3fd51ecec48c 00:16:40.426 06:50:33 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bdev_name=f45ad49b-62d2-4991-871a-3fd51ecec48c 00:16:40.426 06:50:33 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local bdev_info 00:16:40.426 06:50:33 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # local bs 00:16:40.426 06:50:33 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # local nb 00:16:40.426 06:50:33 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b f45ad49b-62d2-4991-871a-3fd51ecec48c 00:16:40.689 06:50:33 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:16:40.689 { 00:16:40.689 "name": "f45ad49b-62d2-4991-871a-3fd51ecec48c", 00:16:40.689 "aliases": [ 00:16:40.689 "lvs/nvme0n1p0" 00:16:40.689 ], 00:16:40.689 "product_name": "Logical Volume", 00:16:40.689 "block_size": 4096, 00:16:40.689 "num_blocks": 26476544, 00:16:40.689 "uuid": "f45ad49b-62d2-4991-871a-3fd51ecec48c", 00:16:40.689 "assigned_rate_limits": { 00:16:40.689 "rw_ios_per_sec": 0, 00:16:40.689 "rw_mbytes_per_sec": 0, 00:16:40.689 "r_mbytes_per_sec": 0, 00:16:40.689 "w_mbytes_per_sec": 0 00:16:40.689 }, 00:16:40.689 "claimed": false, 00:16:40.689 "zoned": false, 00:16:40.689 "supported_io_types": { 00:16:40.689 "read": true, 00:16:40.689 "write": true, 00:16:40.689 "unmap": true, 00:16:40.689 "flush": false, 00:16:40.689 "reset": true, 00:16:40.689 "nvme_admin": false, 00:16:40.689 "nvme_io": false, 00:16:40.689 "nvme_io_md": false, 00:16:40.689 "write_zeroes": true, 00:16:40.689 "zcopy": false, 00:16:40.689 "get_zone_info": false, 00:16:40.689 "zone_management": false, 00:16:40.689 "zone_append": false, 00:16:40.689 "compare": false, 00:16:40.689 "compare_and_write": false, 00:16:40.689 "abort": false, 00:16:40.689 "seek_hole": true, 00:16:40.689 "seek_data": true, 00:16:40.689 "copy": false, 00:16:40.689 "nvme_iov_md": false 00:16:40.689 }, 00:16:40.689 "driver_specific": { 00:16:40.689 "lvol": { 00:16:40.689 "lvol_store_uuid": "f1992292-5c42-448e-b968-a4ecf9ae834a", 00:16:40.689 "base_bdev": "nvme0n1", 00:16:40.689 "thin_provision": true, 00:16:40.689 "num_allocated_clusters": 0, 00:16:40.689 "snapshot": false, 00:16:40.689 "clone": false, 00:16:40.689 "esnap_clone": false 00:16:40.689 } 00:16:40.689 } 00:16:40.689 } 00:16:40.689 ]' 00:16:40.689 06:50:33 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:16:40.689 06:50:33 ftl.ftl_trim -- 
common/autotest_common.sh@1387 -- # bs=4096 00:16:40.689 06:50:33 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:16:40.951 06:50:33 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # nb=26476544 00:16:40.951 06:50:33 ftl.ftl_trim -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:16:40.951 06:50:33 ftl.ftl_trim -- common/autotest_common.sh@1392 -- # echo 103424 00:16:40.951 06:50:33 ftl.ftl_trim -- ftl/common.sh@48 -- # cache_size=5171 00:16:40.951 06:50:33 ftl.ftl_trim -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:16:40.951 06:50:33 ftl.ftl_trim -- ftl/trim.sh@44 -- # nv_cache=nvc0n1p0 00:16:40.951 06:50:33 ftl.ftl_trim -- ftl/trim.sh@46 -- # l2p_percentage=60 00:16:40.951 06:50:33 ftl.ftl_trim -- ftl/trim.sh@47 -- # get_bdev_size f45ad49b-62d2-4991-871a-3fd51ecec48c 00:16:40.951 06:50:33 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bdev_name=f45ad49b-62d2-4991-871a-3fd51ecec48c 00:16:40.951 06:50:33 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local bdev_info 00:16:40.951 06:50:33 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # local bs 00:16:40.951 06:50:33 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # local nb 00:16:40.951 06:50:33 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b f45ad49b-62d2-4991-871a-3fd51ecec48c 00:16:41.212 06:50:34 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:16:41.212 { 00:16:41.212 "name": "f45ad49b-62d2-4991-871a-3fd51ecec48c", 00:16:41.212 "aliases": [ 00:16:41.212 "lvs/nvme0n1p0" 00:16:41.212 ], 00:16:41.212 "product_name": "Logical Volume", 00:16:41.212 "block_size": 4096, 00:16:41.212 "num_blocks": 26476544, 00:16:41.212 "uuid": "f45ad49b-62d2-4991-871a-3fd51ecec48c", 00:16:41.212 "assigned_rate_limits": { 00:16:41.212 "rw_ios_per_sec": 0, 00:16:41.212 "rw_mbytes_per_sec": 0, 00:16:41.212 "r_mbytes_per_sec": 0, 00:16:41.212 "w_mbytes_per_sec": 0 00:16:41.212 }, 00:16:41.212 "claimed": false, 00:16:41.212 "zoned": false, 00:16:41.212 "supported_io_types": { 00:16:41.212 "read": true, 00:16:41.212 "write": true, 00:16:41.212 "unmap": true, 00:16:41.212 "flush": false, 00:16:41.212 "reset": true, 00:16:41.212 "nvme_admin": false, 00:16:41.212 "nvme_io": false, 00:16:41.212 "nvme_io_md": false, 00:16:41.212 "write_zeroes": true, 00:16:41.212 "zcopy": false, 00:16:41.212 "get_zone_info": false, 00:16:41.212 "zone_management": false, 00:16:41.212 "zone_append": false, 00:16:41.212 "compare": false, 00:16:41.212 "compare_and_write": false, 00:16:41.212 "abort": false, 00:16:41.212 "seek_hole": true, 00:16:41.212 "seek_data": true, 00:16:41.212 "copy": false, 00:16:41.212 "nvme_iov_md": false 00:16:41.212 }, 00:16:41.212 "driver_specific": { 00:16:41.212 "lvol": { 00:16:41.212 "lvol_store_uuid": "f1992292-5c42-448e-b968-a4ecf9ae834a", 00:16:41.212 "base_bdev": "nvme0n1", 00:16:41.212 "thin_provision": true, 00:16:41.212 "num_allocated_clusters": 0, 00:16:41.212 "snapshot": false, 00:16:41.212 "clone": false, 00:16:41.212 "esnap_clone": false 00:16:41.212 } 00:16:41.212 } 00:16:41.212 } 00:16:41.212 ]' 00:16:41.212 06:50:34 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:16:41.212 06:50:34 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bs=4096 00:16:41.212 06:50:34 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:16:41.212 06:50:34 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # 
nb=26476544 00:16:41.212 06:50:34 ftl.ftl_trim -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:16:41.212 06:50:34 ftl.ftl_trim -- common/autotest_common.sh@1392 -- # echo 103424 00:16:41.212 06:50:34 ftl.ftl_trim -- ftl/trim.sh@47 -- # l2p_dram_size_mb=60 00:16:41.212 06:50:34 ftl.ftl_trim -- ftl/trim.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d f45ad49b-62d2-4991-871a-3fd51ecec48c -c nvc0n1p0 --core_mask 7 --l2p_dram_limit 60 --overprovisioning 10 00:16:41.474 [2024-11-18 06:50:34.405679] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:41.474 [2024-11-18 06:50:34.405720] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:16:41.474 [2024-11-18 06:50:34.405731] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:16:41.474 [2024-11-18 06:50:34.405741] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:41.474 [2024-11-18 06:50:34.407667] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:41.474 [2024-11-18 06:50:34.407699] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:16:41.474 [2024-11-18 06:50:34.407707] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.905 ms 00:16:41.474 [2024-11-18 06:50:34.407715] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:41.474 [2024-11-18 06:50:34.407884] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:16:41.474 [2024-11-18 06:50:34.408085] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:16:41.474 [2024-11-18 06:50:34.408102] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:41.474 [2024-11-18 06:50:34.408119] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:16:41.474 [2024-11-18 06:50:34.408130] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.226 ms 00:16:41.474 [2024-11-18 06:50:34.408143] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:41.474 [2024-11-18 06:50:34.408385] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID ec0999a9-9a7d-450e-b3a6-a004ddc4ed37 00:16:41.474 [2024-11-18 06:50:34.409336] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:41.474 [2024-11-18 06:50:34.409369] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:16:41.475 [2024-11-18 06:50:34.409379] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:16:41.475 [2024-11-18 06:50:34.409392] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:41.475 [2024-11-18 06:50:34.414262] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:41.475 [2024-11-18 06:50:34.414286] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:16:41.475 [2024-11-18 06:50:34.414295] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.800 ms 00:16:41.475 [2024-11-18 06:50:34.414313] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:41.475 [2024-11-18 06:50:34.414398] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:41.475 [2024-11-18 06:50:34.414414] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:16:41.475 [2024-11-18 06:50:34.414422] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 0.044 ms 00:16:41.475 [2024-11-18 06:50:34.414429] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:41.475 [2024-11-18 06:50:34.414471] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:41.475 [2024-11-18 06:50:34.414485] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:16:41.475 [2024-11-18 06:50:34.414493] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:16:41.475 [2024-11-18 06:50:34.414505] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:41.475 [2024-11-18 06:50:34.414530] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:16:41.475 [2024-11-18 06:50:34.415800] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:41.475 [2024-11-18 06:50:34.415837] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:16:41.475 [2024-11-18 06:50:34.415845] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.276 ms 00:16:41.475 [2024-11-18 06:50:34.415853] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:41.475 [2024-11-18 06:50:34.415899] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:41.475 [2024-11-18 06:50:34.415907] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:16:41.475 [2024-11-18 06:50:34.415913] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:16:41.475 [2024-11-18 06:50:34.415922] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:41.475 [2024-11-18 06:50:34.415948] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:16:41.475 [2024-11-18 06:50:34.416075] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:16:41.475 [2024-11-18 06:50:34.416085] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:16:41.475 [2024-11-18 06:50:34.416096] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:16:41.475 [2024-11-18 06:50:34.416104] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:16:41.475 [2024-11-18 06:50:34.416113] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:16:41.475 [2024-11-18 06:50:34.416119] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:16:41.475 [2024-11-18 06:50:34.416126] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:16:41.475 [2024-11-18 06:50:34.416132] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:16:41.475 [2024-11-18 06:50:34.416138] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:16:41.475 [2024-11-18 06:50:34.416146] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:41.475 [2024-11-18 06:50:34.416152] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:16:41.475 [2024-11-18 06:50:34.416158] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.198 ms 00:16:41.475 [2024-11-18 06:50:34.416164] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:41.475 [2024-11-18 06:50:34.416239] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:41.475 
[2024-11-18 06:50:34.416248] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:16:41.475 [2024-11-18 06:50:34.416253] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:16:41.475 [2024-11-18 06:50:34.416260] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:41.475 [2024-11-18 06:50:34.416346] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:16:41.475 [2024-11-18 06:50:34.416356] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:16:41.475 [2024-11-18 06:50:34.416362] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:16:41.475 [2024-11-18 06:50:34.416369] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:41.475 [2024-11-18 06:50:34.416374] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:16:41.475 [2024-11-18 06:50:34.416380] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:16:41.475 [2024-11-18 06:50:34.416385] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:16:41.475 [2024-11-18 06:50:34.416391] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:16:41.475 [2024-11-18 06:50:34.416397] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:16:41.475 [2024-11-18 06:50:34.416404] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:16:41.475 [2024-11-18 06:50:34.416409] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:16:41.475 [2024-11-18 06:50:34.416416] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:16:41.475 [2024-11-18 06:50:34.416420] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:16:41.475 [2024-11-18 06:50:34.416427] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:16:41.475 [2024-11-18 06:50:34.416433] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:16:41.475 [2024-11-18 06:50:34.416440] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:41.475 [2024-11-18 06:50:34.416445] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:16:41.475 [2024-11-18 06:50:34.416452] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:16:41.475 [2024-11-18 06:50:34.416458] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:41.475 [2024-11-18 06:50:34.416465] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:16:41.475 [2024-11-18 06:50:34.416471] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:16:41.475 [2024-11-18 06:50:34.416479] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:41.475 [2024-11-18 06:50:34.416484] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:16:41.475 [2024-11-18 06:50:34.416491] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:16:41.475 [2024-11-18 06:50:34.416497] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:41.475 [2024-11-18 06:50:34.416503] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:16:41.475 [2024-11-18 06:50:34.416509] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:16:41.475 [2024-11-18 06:50:34.416515] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:41.475 [2024-11-18 06:50:34.416521] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] 
Region p2l3 00:16:41.475 [2024-11-18 06:50:34.416529] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:16:41.475 [2024-11-18 06:50:34.416535] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:41.475 [2024-11-18 06:50:34.416542] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:16:41.475 [2024-11-18 06:50:34.416547] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:16:41.475 [2024-11-18 06:50:34.416554] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:16:41.475 [2024-11-18 06:50:34.416560] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:16:41.475 [2024-11-18 06:50:34.416568] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:16:41.475 [2024-11-18 06:50:34.416574] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:16:41.475 [2024-11-18 06:50:34.416580] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:16:41.475 [2024-11-18 06:50:34.416586] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:16:41.475 [2024-11-18 06:50:34.416593] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:41.475 [2024-11-18 06:50:34.416598] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:16:41.475 [2024-11-18 06:50:34.416605] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:16:41.475 [2024-11-18 06:50:34.416611] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:41.475 [2024-11-18 06:50:34.416617] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:16:41.475 [2024-11-18 06:50:34.416623] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:16:41.475 [2024-11-18 06:50:34.416632] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:16:41.475 [2024-11-18 06:50:34.416639] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:41.475 [2024-11-18 06:50:34.416647] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:16:41.475 [2024-11-18 06:50:34.416653] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:16:41.475 [2024-11-18 06:50:34.416660] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:16:41.475 [2024-11-18 06:50:34.416667] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:16:41.475 [2024-11-18 06:50:34.416674] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:16:41.475 [2024-11-18 06:50:34.416679] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:16:41.475 [2024-11-18 06:50:34.416689] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:16:41.475 [2024-11-18 06:50:34.416696] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:16:41.475 [2024-11-18 06:50:34.416705] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:16:41.475 [2024-11-18 06:50:34.416711] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:16:41.475 [2024-11-18 06:50:34.416719] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 
blk_sz:0x80 00:16:41.475 [2024-11-18 06:50:34.416725] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:16:41.475 [2024-11-18 06:50:34.416732] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:16:41.475 [2024-11-18 06:50:34.416741] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:16:41.476 [2024-11-18 06:50:34.416750] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:16:41.476 [2024-11-18 06:50:34.416757] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:16:41.476 [2024-11-18 06:50:34.416764] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:16:41.476 [2024-11-18 06:50:34.416770] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:16:41.476 [2024-11-18 06:50:34.416777] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:16:41.476 [2024-11-18 06:50:34.416783] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:16:41.476 [2024-11-18 06:50:34.416791] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:16:41.476 [2024-11-18 06:50:34.416798] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:16:41.476 [2024-11-18 06:50:34.416805] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:16:41.476 [2024-11-18 06:50:34.416812] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:16:41.476 [2024-11-18 06:50:34.416821] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:16:41.476 [2024-11-18 06:50:34.416827] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:16:41.476 [2024-11-18 06:50:34.416834] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:16:41.476 [2024-11-18 06:50:34.416840] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:16:41.476 [2024-11-18 06:50:34.416847] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:41.476 [2024-11-18 06:50:34.416852] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:16:41.476 [2024-11-18 06:50:34.416861] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.550 ms 00:16:41.476 [2024-11-18 06:50:34.416867] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:41.476 [2024-11-18 06:50:34.416925] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region 
needs scrubbing, this may take a while. 00:16:41.476 [2024-11-18 06:50:34.416932] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:16:44.023 [2024-11-18 06:50:36.598507] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:44.023 [2024-11-18 06:50:36.598738] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:16:44.023 [2024-11-18 06:50:36.598762] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2181.553 ms 00:16:44.023 [2024-11-18 06:50:36.598774] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:44.023 [2024-11-18 06:50:36.607381] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:44.023 [2024-11-18 06:50:36.607420] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:16:44.023 [2024-11-18 06:50:36.607435] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.495 ms 00:16:44.023 [2024-11-18 06:50:36.607445] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:44.023 [2024-11-18 06:50:36.607574] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:44.023 [2024-11-18 06:50:36.607584] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:16:44.023 [2024-11-18 06:50:36.607594] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.064 ms 00:16:44.023 [2024-11-18 06:50:36.607604] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:44.023 [2024-11-18 06:50:36.639676] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:44.023 [2024-11-18 06:50:36.639766] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:16:44.023 [2024-11-18 06:50:36.639803] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.033 ms 00:16:44.023 [2024-11-18 06:50:36.639825] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:44.023 [2024-11-18 06:50:36.640074] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:44.024 [2024-11-18 06:50:36.640107] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:16:44.024 [2024-11-18 06:50:36.640141] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:16:44.024 [2024-11-18 06:50:36.640161] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:44.024 [2024-11-18 06:50:36.640675] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:44.024 [2024-11-18 06:50:36.641013] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:16:44.024 [2024-11-18 06:50:36.641063] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.444 ms 00:16:44.024 [2024-11-18 06:50:36.641084] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:44.024 [2024-11-18 06:50:36.641410] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:44.024 [2024-11-18 06:50:36.641467] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:16:44.024 [2024-11-18 06:50:36.641501] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.242 ms 00:16:44.024 [2024-11-18 06:50:36.641529] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:44.024 [2024-11-18 06:50:36.649373] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:44.024 [2024-11-18 06:50:36.649403] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize 
reloc 00:16:44.024 [2024-11-18 06:50:36.649415] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.769 ms 00:16:44.024 [2024-11-18 06:50:36.649422] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:44.024 [2024-11-18 06:50:36.657639] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:16:44.024 [2024-11-18 06:50:36.672169] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:44.024 [2024-11-18 06:50:36.672202] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:16:44.024 [2024-11-18 06:50:36.672214] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.663 ms 00:16:44.024 [2024-11-18 06:50:36.672223] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:44.024 [2024-11-18 06:50:36.727458] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:44.024 [2024-11-18 06:50:36.727610] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:16:44.024 [2024-11-18 06:50:36.727627] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 55.161 ms 00:16:44.024 [2024-11-18 06:50:36.727639] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:44.024 [2024-11-18 06:50:36.727823] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:44.024 [2024-11-18 06:50:36.727836] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:16:44.024 [2024-11-18 06:50:36.727844] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.136 ms 00:16:44.024 [2024-11-18 06:50:36.727853] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:44.024 [2024-11-18 06:50:36.730806] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:44.024 [2024-11-18 06:50:36.730925] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:16:44.024 [2024-11-18 06:50:36.730940] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.915 ms 00:16:44.024 [2024-11-18 06:50:36.730961] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:44.024 [2024-11-18 06:50:36.733353] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:44.024 [2024-11-18 06:50:36.733387] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:16:44.024 [2024-11-18 06:50:36.733397] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.314 ms 00:16:44.024 [2024-11-18 06:50:36.733407] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:44.024 [2024-11-18 06:50:36.733730] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:44.024 [2024-11-18 06:50:36.733746] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:16:44.024 [2024-11-18 06:50:36.733755] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.258 ms 00:16:44.024 [2024-11-18 06:50:36.733766] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:44.024 [2024-11-18 06:50:36.759063] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:44.024 [2024-11-18 06:50:36.759098] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:16:44.024 [2024-11-18 06:50:36.759108] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.264 ms 00:16:44.024 [2024-11-18 06:50:36.759120] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
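Every [FTL][ftl0] trace_step line in this stretch, from "Check configuration" through "Wipe P2L region" above and the remaining steps below, is emitted by the single management pipeline that ftl/trim.sh@49 kicked off with one RPC. A minimal sketch of that create-and-wait pattern, reusing only names and values already reported in this log (rpc.py path shortened for readability; the -t 240 timeout matters because the NV cache scrub alone took ~2.2 s here):

    # Create the FTL bdev over the thin lvol (data) and nvc0n1p0 (write buffer).
    # core_mask 7 matches spdk_tgt -m 0x7; l2p_dram_limit caps the L2P table at 60 MiB.
    rpc.py -t 240 bdev_ftl_create -b ftl0 \
        -d f45ad49b-62d2-4991-871a-3fd51ecec48c -c nvc0n1p0 \
        --core_mask 7 --l2p_dram_limit 60 --overprovisioning 10
    # Block until bdev examination completes, then confirm ftl0 is registered.
    rpc.py bdev_wait_for_examine
    rpc.py bdev_get_bdevs -b ftl0 -t 2000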
00:16:44.024 [2024-11-18 06:50:36.763033] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:44.024 [2024-11-18 06:50:36.763069] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:16:44.024 [2024-11-18 06:50:36.763090] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.849 ms 00:16:44.024 [2024-11-18 06:50:36.763099] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:44.024 [2024-11-18 06:50:36.766142] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:44.024 [2024-11-18 06:50:36.766177] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:16:44.024 [2024-11-18 06:50:36.766186] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.997 ms 00:16:44.024 [2024-11-18 06:50:36.766196] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:44.024 [2024-11-18 06:50:36.769507] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:44.024 [2024-11-18 06:50:36.769626] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:16:44.024 [2024-11-18 06:50:36.769640] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.267 ms 00:16:44.024 [2024-11-18 06:50:36.769651] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:44.024 [2024-11-18 06:50:36.769713] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:44.024 [2024-11-18 06:50:36.769725] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:16:44.024 [2024-11-18 06:50:36.769734] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:16:44.024 [2024-11-18 06:50:36.769742] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:44.024 [2024-11-18 06:50:36.769815] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:44.024 [2024-11-18 06:50:36.769825] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:16:44.024 [2024-11-18 06:50:36.769834] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:16:44.024 [2024-11-18 06:50:36.769843] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:44.024 [2024-11-18 06:50:36.770720] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:16:44.024 [2024-11-18 06:50:36.771682] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 2364.718 ms, result 0 00:16:44.024 [2024-11-18 06:50:36.772306] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:16:44.024 { 00:16:44.024 "name": "ftl0", 00:16:44.024 "uuid": "ec0999a9-9a7d-450e-b3a6-a004ddc4ed37" 00:16:44.024 } 00:16:44.024 06:50:36 ftl.ftl_trim -- ftl/trim.sh@51 -- # waitforbdev ftl0 00:16:44.024 06:50:36 ftl.ftl_trim -- common/autotest_common.sh@903 -- # local bdev_name=ftl0 00:16:44.024 06:50:36 ftl.ftl_trim -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:44.024 06:50:36 ftl.ftl_trim -- common/autotest_common.sh@905 -- # local i 00:16:44.024 06:50:36 ftl.ftl_trim -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:44.024 06:50:36 ftl.ftl_trim -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:44.024 06:50:36 ftl.ftl_trim -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:16:44.024 06:50:36 ftl.ftl_trim -- 
common/autotest_common.sh@910 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 -t 2000 00:16:44.285 [ 00:16:44.285 { 00:16:44.285 "name": "ftl0", 00:16:44.285 "aliases": [ 00:16:44.285 "ec0999a9-9a7d-450e-b3a6-a004ddc4ed37" 00:16:44.285 ], 00:16:44.285 "product_name": "FTL disk", 00:16:44.285 "block_size": 4096, 00:16:44.285 "num_blocks": 23592960, 00:16:44.285 "uuid": "ec0999a9-9a7d-450e-b3a6-a004ddc4ed37", 00:16:44.285 "assigned_rate_limits": { 00:16:44.285 "rw_ios_per_sec": 0, 00:16:44.285 "rw_mbytes_per_sec": 0, 00:16:44.285 "r_mbytes_per_sec": 0, 00:16:44.285 "w_mbytes_per_sec": 0 00:16:44.285 }, 00:16:44.285 "claimed": false, 00:16:44.285 "zoned": false, 00:16:44.285 "supported_io_types": { 00:16:44.285 "read": true, 00:16:44.285 "write": true, 00:16:44.285 "unmap": true, 00:16:44.285 "flush": true, 00:16:44.285 "reset": false, 00:16:44.285 "nvme_admin": false, 00:16:44.285 "nvme_io": false, 00:16:44.285 "nvme_io_md": false, 00:16:44.285 "write_zeroes": true, 00:16:44.285 "zcopy": false, 00:16:44.285 "get_zone_info": false, 00:16:44.285 "zone_management": false, 00:16:44.285 "zone_append": false, 00:16:44.285 "compare": false, 00:16:44.285 "compare_and_write": false, 00:16:44.285 "abort": false, 00:16:44.285 "seek_hole": false, 00:16:44.285 "seek_data": false, 00:16:44.285 "copy": false, 00:16:44.285 "nvme_iov_md": false 00:16:44.285 }, 00:16:44.286 "driver_specific": { 00:16:44.286 "ftl": { 00:16:44.286 "base_bdev": "f45ad49b-62d2-4991-871a-3fd51ecec48c", 00:16:44.286 "cache": "nvc0n1p0" 00:16:44.286 } 00:16:44.286 } 00:16:44.286 } 00:16:44.286 ] 00:16:44.286 06:50:37 ftl.ftl_trim -- common/autotest_common.sh@911 -- # return 0 00:16:44.286 06:50:37 ftl.ftl_trim -- ftl/trim.sh@54 -- # echo '{"subsystems": [' 00:16:44.286 06:50:37 ftl.ftl_trim -- ftl/trim.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:16:44.547 06:50:37 ftl.ftl_trim -- ftl/trim.sh@56 -- # echo ']}' 00:16:44.547 06:50:37 ftl.ftl_trim -- ftl/trim.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 00:16:44.547 06:50:37 ftl.ftl_trim -- ftl/trim.sh@59 -- # bdev_info='[ 00:16:44.547 { 00:16:44.547 "name": "ftl0", 00:16:44.547 "aliases": [ 00:16:44.547 "ec0999a9-9a7d-450e-b3a6-a004ddc4ed37" 00:16:44.547 ], 00:16:44.547 "product_name": "FTL disk", 00:16:44.547 "block_size": 4096, 00:16:44.547 "num_blocks": 23592960, 00:16:44.547 "uuid": "ec0999a9-9a7d-450e-b3a6-a004ddc4ed37", 00:16:44.547 "assigned_rate_limits": { 00:16:44.547 "rw_ios_per_sec": 0, 00:16:44.547 "rw_mbytes_per_sec": 0, 00:16:44.547 "r_mbytes_per_sec": 0, 00:16:44.547 "w_mbytes_per_sec": 0 00:16:44.547 }, 00:16:44.547 "claimed": false, 00:16:44.547 "zoned": false, 00:16:44.547 "supported_io_types": { 00:16:44.547 "read": true, 00:16:44.547 "write": true, 00:16:44.547 "unmap": true, 00:16:44.547 "flush": true, 00:16:44.547 "reset": false, 00:16:44.547 "nvme_admin": false, 00:16:44.547 "nvme_io": false, 00:16:44.547 "nvme_io_md": false, 00:16:44.547 "write_zeroes": true, 00:16:44.547 "zcopy": false, 00:16:44.547 "get_zone_info": false, 00:16:44.547 "zone_management": false, 00:16:44.547 "zone_append": false, 00:16:44.547 "compare": false, 00:16:44.547 "compare_and_write": false, 00:16:44.547 "abort": false, 00:16:44.547 "seek_hole": false, 00:16:44.547 "seek_data": false, 00:16:44.547 "copy": false, 00:16:44.547 "nvme_iov_md": false 00:16:44.547 }, 00:16:44.548 "driver_specific": { 00:16:44.548 "ftl": { 00:16:44.548 "base_bdev": "f45ad49b-62d2-4991-871a-3fd51ecec48c", 
00:16:44.548 "cache": "nvc0n1p0" 00:16:44.548 } 00:16:44.548 } 00:16:44.548 } 00:16:44.548 ]' 00:16:44.548 06:50:37 ftl.ftl_trim -- ftl/trim.sh@60 -- # jq '.[] .num_blocks' 00:16:44.548 06:50:37 ftl.ftl_trim -- ftl/trim.sh@60 -- # nb=23592960 00:16:44.548 06:50:37 ftl.ftl_trim -- ftl/trim.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:16:44.811 [2024-11-18 06:50:37.805117] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:44.811 [2024-11-18 06:50:37.805157] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:16:44.811 [2024-11-18 06:50:37.805171] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:16:44.811 [2024-11-18 06:50:37.805179] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:44.811 [2024-11-18 06:50:37.805217] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:16:44.811 [2024-11-18 06:50:37.805653] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:44.811 [2024-11-18 06:50:37.805670] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:16:44.811 [2024-11-18 06:50:37.805679] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.422 ms 00:16:44.811 [2024-11-18 06:50:37.805688] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:44.811 [2024-11-18 06:50:37.806312] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:44.811 [2024-11-18 06:50:37.806334] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:16:44.811 [2024-11-18 06:50:37.806343] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.591 ms 00:16:44.811 [2024-11-18 06:50:37.806352] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:44.811 [2024-11-18 06:50:37.809995] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:44.811 [2024-11-18 06:50:37.810018] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:16:44.811 [2024-11-18 06:50:37.810028] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.618 ms 00:16:44.811 [2024-11-18 06:50:37.810041] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:44.811 [2024-11-18 06:50:37.816990] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:44.811 [2024-11-18 06:50:37.817023] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:16:44.811 [2024-11-18 06:50:37.817034] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.912 ms 00:16:44.811 [2024-11-18 06:50:37.817046] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:44.811 [2024-11-18 06:50:37.819254] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:44.811 [2024-11-18 06:50:37.819290] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:16:44.811 [2024-11-18 06:50:37.819298] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.130 ms 00:16:44.811 [2024-11-18 06:50:37.819307] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:44.811 [2024-11-18 06:50:37.824298] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:44.811 [2024-11-18 06:50:37.824335] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:16:44.811 [2024-11-18 06:50:37.824345] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 4.944 ms 00:16:44.811 [2024-11-18 06:50:37.824354] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:44.811 [2024-11-18 06:50:37.824545] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:44.811 [2024-11-18 06:50:37.824557] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:16:44.811 [2024-11-18 06:50:37.824565] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.144 ms 00:16:44.811 [2024-11-18 06:50:37.824574] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:44.811 [2024-11-18 06:50:37.827237] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:44.811 [2024-11-18 06:50:37.827349] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:16:44.811 [2024-11-18 06:50:37.827363] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.631 ms 00:16:44.811 [2024-11-18 06:50:37.827376] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:44.811 [2024-11-18 06:50:37.829441] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:44.811 [2024-11-18 06:50:37.829475] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:16:44.811 [2024-11-18 06:50:37.829483] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.004 ms 00:16:44.811 [2024-11-18 06:50:37.829494] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:44.811 [2024-11-18 06:50:37.831180] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:44.811 [2024-11-18 06:50:37.831213] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:16:44.811 [2024-11-18 06:50:37.831221] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.638 ms 00:16:44.811 [2024-11-18 06:50:37.831230] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:44.811 [2024-11-18 06:50:37.832712] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:44.811 [2024-11-18 06:50:37.832745] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:16:44.811 [2024-11-18 06:50:37.832753] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.387 ms 00:16:44.811 [2024-11-18 06:50:37.832762] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:44.811 [2024-11-18 06:50:37.832802] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:16:44.811 [2024-11-18 06:50:37.832817] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:16:44.811 [2024-11-18 06:50:37.832827] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:16:44.811 [2024-11-18 06:50:37.832839] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:16:44.811 [2024-11-18 06:50:37.832846] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:16:44.811 [2024-11-18 06:50:37.832856] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:16:44.811 [2024-11-18 06:50:37.832863] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:16:44.811 [2024-11-18 06:50:37.832872] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:16:44.811 [2024-11-18 06:50:37.832880] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:16:44.811 [2024-11-18 06:50:37.832889] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:16:44.811 [2024-11-18 06:50:37.832896] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:16:44.811 [2024-11-18 06:50:37.832905] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:16:44.811 [2024-11-18 06:50:37.832913] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:16:44.811 [2024-11-18 06:50:37.832922] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:16:44.811 [2024-11-18 06:50:37.832930] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:16:44.811 [2024-11-18 06:50:37.832939] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:16:44.811 [2024-11-18 06:50:37.832946] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:16:44.811 [2024-11-18 06:50:37.832957] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:16:44.811 [2024-11-18 06:50:37.832964] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:16:44.811 [2024-11-18 06:50:37.832991] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:16:44.811 [2024-11-18 06:50:37.832999] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:16:44.811 [2024-11-18 06:50:37.833009] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:16:44.811 [2024-11-18 06:50:37.833016] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:16:44.811 [2024-11-18 06:50:37.833025] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:16:44.811 [2024-11-18 06:50:37.833032] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:16:44.811 [2024-11-18 06:50:37.833054] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:16:44.812 [2024-11-18 06:50:37.833061] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:16:44.812 [2024-11-18 06:50:37.833071] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:16:44.812 [2024-11-18 06:50:37.833078] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:16:44.812 [2024-11-18 06:50:37.833087] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:16:44.812 [2024-11-18 06:50:37.833095] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:16:44.812 [2024-11-18 06:50:37.833104] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:16:44.812 [2024-11-18 06:50:37.833111] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:16:44.812 [2024-11-18 
06:50:37.833119] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:16:44.812 [2024-11-18 06:50:37.833126] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:16:44.812 [2024-11-18 06:50:37.833138] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:16:44.812 [2024-11-18 06:50:37.833145] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:16:44.812 [2024-11-18 06:50:37.833155] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:16:44.812 [2024-11-18 06:50:37.833162] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:16:44.812 [2024-11-18 06:50:37.833171] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:16:44.812 [2024-11-18 06:50:37.833179] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:16:44.812 [2024-11-18 06:50:37.833189] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:16:44.812 [2024-11-18 06:50:37.833196] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:16:44.812 [2024-11-18 06:50:37.833205] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:16:44.812 [2024-11-18 06:50:37.833212] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:16:44.812 [2024-11-18 06:50:37.833221] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:16:44.812 [2024-11-18 06:50:37.833228] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:16:44.812 [2024-11-18 06:50:37.833237] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:16:44.812 [2024-11-18 06:50:37.833244] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:16:44.812 [2024-11-18 06:50:37.833253] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:16:44.812 [2024-11-18 06:50:37.833260] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:16:44.812 [2024-11-18 06:50:37.833271] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:16:44.812 [2024-11-18 06:50:37.833279] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:16:44.812 [2024-11-18 06:50:37.833288] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:16:44.812 [2024-11-18 06:50:37.833295] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:16:44.812 [2024-11-18 06:50:37.833303] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:16:44.812 [2024-11-18 06:50:37.833312] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:16:44.812 [2024-11-18 06:50:37.833321] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 
00:16:44.812 [2024-11-18 06:50:37.833328] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:16:44.812 [2024-11-18 06:50:37.833337] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:16:44.812 [2024-11-18 06:50:37.833344] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:16:44.812 [2024-11-18 06:50:37.833352] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:16:44.812 [2024-11-18 06:50:37.833360] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:16:44.812 [2024-11-18 06:50:37.833369] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:16:44.812 [2024-11-18 06:50:37.833376] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:16:44.812 [2024-11-18 06:50:37.833385] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:16:44.812 [2024-11-18 06:50:37.833392] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:16:44.812 [2024-11-18 06:50:37.833404] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:16:44.812 [2024-11-18 06:50:37.833411] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:16:44.812 [2024-11-18 06:50:37.833420] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:16:44.812 [2024-11-18 06:50:37.833427] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:16:44.812 [2024-11-18 06:50:37.833436] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:16:44.812 [2024-11-18 06:50:37.833444] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:16:44.812 [2024-11-18 06:50:37.833452] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:16:44.812 [2024-11-18 06:50:37.833459] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:16:44.812 [2024-11-18 06:50:37.833468] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:16:44.812 [2024-11-18 06:50:37.833475] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:16:44.812 [2024-11-18 06:50:37.833484] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:16:44.812 [2024-11-18 06:50:37.833491] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:16:44.812 [2024-11-18 06:50:37.833500] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:16:44.812 [2024-11-18 06:50:37.833507] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:16:44.812 [2024-11-18 06:50:37.833515] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:16:44.812 [2024-11-18 06:50:37.833523] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 
wr_cnt: 0 state: free 00:16:44.812 [2024-11-18 06:50:37.833534] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:16:44.812 [2024-11-18 06:50:37.833541] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:16:44.812 [2024-11-18 06:50:37.833550] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:16:44.812 [2024-11-18 06:50:37.833558] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:16:44.812 [2024-11-18 06:50:37.833566] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:16:44.812 [2024-11-18 06:50:37.833574] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:16:44.812 [2024-11-18 06:50:37.833583] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:16:44.812 [2024-11-18 06:50:37.833590] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:16:44.812 [2024-11-18 06:50:37.833598] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:16:44.812 [2024-11-18 06:50:37.833605] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:16:44.812 [2024-11-18 06:50:37.833614] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:16:44.812 [2024-11-18 06:50:37.833622] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:16:44.812 [2024-11-18 06:50:37.833632] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:16:44.812 [2024-11-18 06:50:37.833639] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:16:44.812 [2024-11-18 06:50:37.833647] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:16:44.812 [2024-11-18 06:50:37.833655] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:16:44.812 [2024-11-18 06:50:37.833665] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:16:44.812 [2024-11-18 06:50:37.833672] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:16:44.812 [2024-11-18 06:50:37.833690] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:16:44.812 [2024-11-18 06:50:37.833698] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: ec0999a9-9a7d-450e-b3a6-a004ddc4ed37 00:16:44.812 [2024-11-18 06:50:37.833707] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:16:44.812 [2024-11-18 06:50:37.833725] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:16:44.812 [2024-11-18 06:50:37.833734] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:16:44.812 [2024-11-18 06:50:37.833743] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:16:44.812 [2024-11-18 06:50:37.833751] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:16:44.812 [2024-11-18 06:50:37.833759] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:16:44.812 
[2024-11-18 06:50:37.833767] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:16:44.812 [2024-11-18 06:50:37.833773] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:16:44.812 [2024-11-18 06:50:37.833781] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:16:44.812 [2024-11-18 06:50:37.833788] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:44.812 [2024-11-18 06:50:37.833796] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:16:44.812 [2024-11-18 06:50:37.833808] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.986 ms 00:16:44.812 [2024-11-18 06:50:37.833819] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:44.812 [2024-11-18 06:50:37.835597] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:44.813 [2024-11-18 06:50:37.835697] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:16:44.813 [2024-11-18 06:50:37.835745] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.736 ms 00:16:44.813 [2024-11-18 06:50:37.835771] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:44.813 [2024-11-18 06:50:37.835902] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:44.813 [2024-11-18 06:50:37.835932] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:16:44.813 [2024-11-18 06:50:37.835994] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.058 ms 00:16:44.813 [2024-11-18 06:50:37.836059] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:44.813 [2024-11-18 06:50:37.841407] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:44.813 [2024-11-18 06:50:37.841509] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:16:44.813 [2024-11-18 06:50:37.841586] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:44.813 [2024-11-18 06:50:37.841612] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:44.813 [2024-11-18 06:50:37.841732] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:44.813 [2024-11-18 06:50:37.841767] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:16:44.813 [2024-11-18 06:50:37.841865] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:44.813 [2024-11-18 06:50:37.841893] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:44.813 [2024-11-18 06:50:37.841985] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:44.813 [2024-11-18 06:50:37.842021] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:16:44.813 [2024-11-18 06:50:37.842042] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:44.813 [2024-11-18 06:50:37.842097] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:44.813 [2024-11-18 06:50:37.842139] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:44.813 [2024-11-18 06:50:37.842179] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:16:44.813 [2024-11-18 06:50:37.842199] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:44.813 [2024-11-18 06:50:37.842219] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:44.813 [2024-11-18 06:50:37.851572] mngt/ftl_mngt.c: 427:trace_step: 
*NOTICE*: [FTL][ftl0] Rollback 00:16:44.813 [2024-11-18 06:50:37.851706] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:16:44.813 [2024-11-18 06:50:37.851755] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:44.813 [2024-11-18 06:50:37.851781] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:44.813 [2024-11-18 06:50:37.859655] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:44.813 [2024-11-18 06:50:37.859785] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:16:44.813 [2024-11-18 06:50:37.859840] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:44.813 [2024-11-18 06:50:37.859866] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:44.813 [2024-11-18 06:50:37.859928] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:44.813 [2024-11-18 06:50:37.859955] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:16:44.813 [2024-11-18 06:50:37.859988] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:44.813 [2024-11-18 06:50:37.860010] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:44.813 [2024-11-18 06:50:37.860131] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:44.813 [2024-11-18 06:50:37.860173] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:16:44.813 [2024-11-18 06:50:37.860193] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:44.813 [2024-11-18 06:50:37.860266] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:44.813 [2024-11-18 06:50:37.860372] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:44.813 [2024-11-18 06:50:37.860442] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:16:44.813 [2024-11-18 06:50:37.860466] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:44.813 [2024-11-18 06:50:37.860512] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:44.813 [2024-11-18 06:50:37.860586] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:44.813 [2024-11-18 06:50:37.860694] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:16:44.813 [2024-11-18 06:50:37.860718] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:44.813 [2024-11-18 06:50:37.860762] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:44.813 [2024-11-18 06:50:37.860823] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:44.813 [2024-11-18 06:50:37.860853] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:16:44.813 [2024-11-18 06:50:37.860874] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:44.813 [2024-11-18 06:50:37.860898] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:44.813 [2024-11-18 06:50:37.860964] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:44.813 [2024-11-18 06:50:37.861124] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:16:44.813 [2024-11-18 06:50:37.861149] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:44.813 [2024-11-18 06:50:37.861170] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:44.813 
[2024-11-18 06:50:37.861383] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 56.245 ms, result 0
00:16:44.813 true
00:16:44.813 06:50:37 ftl.ftl_trim -- ftl/trim.sh@63 -- # killprocess 84988
00:16:44.813 06:50:37 ftl.ftl_trim -- common/autotest_common.sh@954 -- # '[' -z 84988 ']'
00:16:44.813 06:50:37 ftl.ftl_trim -- common/autotest_common.sh@958 -- # kill -0 84988
00:16:44.813 06:50:37 ftl.ftl_trim -- common/autotest_common.sh@959 -- # uname
00:16:44.813 06:50:37 ftl.ftl_trim -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:16:44.813 06:50:37 ftl.ftl_trim -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 84988
00:16:45.074 killing process with pid 84988
06:50:37 ftl.ftl_trim -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:16:45.074 06:50:37 ftl.ftl_trim -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:16:45.074 06:50:37 ftl.ftl_trim -- common/autotest_common.sh@972 -- # echo 'killing process with pid 84988'
00:16:45.074 06:50:37 ftl.ftl_trim -- common/autotest_common.sh@973 -- # kill 84988
00:16:45.074 06:50:37 ftl.ftl_trim -- common/autotest_common.sh@978 -- # wait 84988
00:16:50.348 06:50:42 ftl.ftl_trim -- ftl/trim.sh@66 -- # dd if=/dev/urandom bs=4K count=65536
00:16:50.608 65536+0 records in
00:16:50.608 65536+0 records out
00:16:50.608 268435456 bytes (268 MB, 256 MiB) copied, 1.06819 s, 251 MB/s
00:16:50.608 06:50:43 ftl.ftl_trim -- ftl/trim.sh@69 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json
[2024-11-18 06:50:43.528053] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization...
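The two trim.sh steps traced above (trim.sh@66 and trim.sh@69) are the data-load phase of the test: dd generates a 256 MiB random pattern (65536 blocks of 4 KiB = 268435456 bytes; 268435456 B / 1.06819 s is the ~251 MB/s dd reports), which spdk_dd then writes through the FTL bdev ftl0. A minimal sketch of the pair, assuming dd's output target (not shown in the trace line above) is the random_pattern path that spdk_dd reads; the actual trim.sh may differ:

    # Generate the 256 MiB pattern: 65536 * 4096 B = 268435456 B.
    PATTERN=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern
    dd if=/dev/urandom of="$PATTERN" bs=4K count=65536
    # Replay the pattern through the FTL bdev (ftl0) defined in ftl.json;
    # the SPDK/DPDK initialization and 'FTL startup' trace that follow
    # belong to this spdk_dd process.
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd \
        --if="$PATTERN" \
        --ob=ftl0 \
        --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json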
00:16:50.608 [2024-11-18 06:50:43.528150] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85153 ]
00:16:50.608 [2024-11-18 06:50:43.672872] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:16:50.608 [2024-11-18 06:50:43.689578] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:16:50.868 [2024-11-18 06:50:43.770347] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1
00:16:50.868 [2024-11-18 06:50:43.770402] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1
00:16:50.868 [2024-11-18 06:50:43.912719] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:16:50.868 [2024-11-18 06:50:43.912754] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration
00:16:50.868 [2024-11-18 06:50:43.912763] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms
00:16:50.868 [2024-11-18 06:50:43.912770] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:16:50.868 [2024-11-18 06:50:43.914507] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:16:50.868 [2024-11-18 06:50:43.914542] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev
00:16:50.868 [2024-11-18 06:50:43.914550] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.725 ms
00:16:50.868 [2024-11-18 06:50:43.914555] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:16:50.868 [2024-11-18 06:50:43.914608] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache
00:16:50.868 [2024-11-18 06:50:43.914859] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device
00:16:50.868 [2024-11-18 06:50:43.914870] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:16:50.868 [2024-11-18 06:50:43.914878] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev
00:16:50.868 [2024-11-18 06:50:43.914887] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.267 ms
00:16:50.868 [2024-11-18 06:50:43.914892] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:16:50.868 [2024-11-18 06:50:43.915856] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0
00:16:50.868 [2024-11-18 06:50:43.917731] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:16:50.868 [2024-11-18 06:50:43.917759] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block
00:16:50.868 [2024-11-18 06:50:43.917767] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.877 ms
00:16:50.868 [2024-11-18 06:50:43.917779] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:16:50.868 [2024-11-18 06:50:43.917823] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:16:50.868 [2024-11-18 06:50:43.917833] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block
00:16:50.868 [2024-11-18 06:50:43.917839] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms
00:16:50.868 [2024-11-18 06:50:43.917844] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:16:50.868 [2024-11-18 06:50:43.922080] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:16:50.868 [2024-11-18 06:50:43.922222] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:16:50.868 [2024-11-18 06:50:43.922234] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.208 ms 00:16:50.868 [2024-11-18 06:50:43.922245] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:50.868 [2024-11-18 06:50:43.922337] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:50.868 [2024-11-18 06:50:43.922346] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:16:50.868 [2024-11-18 06:50:43.922356] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.050 ms 00:16:50.868 [2024-11-18 06:50:43.922361] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:50.868 [2024-11-18 06:50:43.922381] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:50.868 [2024-11-18 06:50:43.922387] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:16:50.868 [2024-11-18 06:50:43.922393] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:16:50.868 [2024-11-18 06:50:43.922401] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:50.868 [2024-11-18 06:50:43.922415] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:16:50.868 [2024-11-18 06:50:43.923552] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:50.868 [2024-11-18 06:50:43.923576] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:16:50.868 [2024-11-18 06:50:43.923583] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.139 ms 00:16:50.868 [2024-11-18 06:50:43.923588] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:50.868 [2024-11-18 06:50:43.923617] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:50.868 [2024-11-18 06:50:43.923624] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:16:50.868 [2024-11-18 06:50:43.923632] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:16:50.868 [2024-11-18 06:50:43.923637] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:50.868 [2024-11-18 06:50:43.923650] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:16:50.868 [2024-11-18 06:50:43.923662] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:16:50.868 [2024-11-18 06:50:43.923688] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:16:50.868 [2024-11-18 06:50:43.923701] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:16:50.868 [2024-11-18 06:50:43.923780] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:16:50.868 [2024-11-18 06:50:43.923788] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:16:50.868 [2024-11-18 06:50:43.923795] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:16:50.868 [2024-11-18 06:50:43.923803] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:16:50.868 [2024-11-18 06:50:43.923810] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:16:50.868 [2024-11-18 06:50:43.923816] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:16:50.868 [2024-11-18 06:50:43.923822] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:16:50.868 [2024-11-18 06:50:43.923828] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:16:50.868 [2024-11-18 06:50:43.923833] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:16:50.868 [2024-11-18 06:50:43.923840] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:50.868 [2024-11-18 06:50:43.923847] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:16:50.868 [2024-11-18 06:50:43.923853] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.191 ms 00:16:50.868 [2024-11-18 06:50:43.923858] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:50.868 [2024-11-18 06:50:43.923928] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:50.868 [2024-11-18 06:50:43.923935] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:16:50.868 [2024-11-18 06:50:43.923943] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:16:50.868 [2024-11-18 06:50:43.923950] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:50.868 [2024-11-18 06:50:43.924042] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:16:50.868 [2024-11-18 06:50:43.924050] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:16:50.868 [2024-11-18 06:50:43.924059] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:16:50.868 [2024-11-18 06:50:43.924070] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:50.868 [2024-11-18 06:50:43.924076] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:16:50.868 [2024-11-18 06:50:43.924081] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:16:50.868 [2024-11-18 06:50:43.924087] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:16:50.868 [2024-11-18 06:50:43.924092] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:16:50.868 [2024-11-18 06:50:43.924099] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:16:50.868 [2024-11-18 06:50:43.924105] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:16:50.868 [2024-11-18 06:50:43.924109] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:16:50.869 [2024-11-18 06:50:43.924114] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:16:50.869 [2024-11-18 06:50:43.924120] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:16:50.869 [2024-11-18 06:50:43.924126] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:16:50.869 [2024-11-18 06:50:43.924131] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:16:50.869 [2024-11-18 06:50:43.924136] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:50.869 [2024-11-18 06:50:43.924141] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:16:50.869 [2024-11-18 06:50:43.924146] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:16:50.869 [2024-11-18 06:50:43.924151] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:50.869 [2024-11-18 06:50:43.924158] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:16:50.869 [2024-11-18 06:50:43.924163] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:16:50.869 [2024-11-18 06:50:43.924168] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:50.869 [2024-11-18 06:50:43.924173] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:16:50.869 [2024-11-18 06:50:43.924178] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:16:50.869 [2024-11-18 06:50:43.924187] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:50.869 [2024-11-18 06:50:43.924193] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:16:50.869 [2024-11-18 06:50:43.924198] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:16:50.869 [2024-11-18 06:50:43.924203] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:50.869 [2024-11-18 06:50:43.924208] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:16:50.869 [2024-11-18 06:50:43.924214] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:16:50.869 [2024-11-18 06:50:43.924219] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:50.869 [2024-11-18 06:50:43.924225] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:16:50.869 [2024-11-18 06:50:43.924231] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:16:50.869 [2024-11-18 06:50:43.924236] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:16:50.869 [2024-11-18 06:50:43.924241] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:16:50.869 [2024-11-18 06:50:43.924247] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:16:50.869 [2024-11-18 06:50:43.924252] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:16:50.869 [2024-11-18 06:50:43.924258] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:16:50.869 [2024-11-18 06:50:43.924263] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:16:50.869 [2024-11-18 06:50:43.924268] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:50.869 [2024-11-18 06:50:43.924276] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:16:50.869 [2024-11-18 06:50:43.924282] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:16:50.869 [2024-11-18 06:50:43.924287] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:50.869 [2024-11-18 06:50:43.924293] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:16:50.869 [2024-11-18 06:50:43.924305] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:16:50.869 [2024-11-18 06:50:43.924311] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:16:50.869 [2024-11-18 06:50:43.924317] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:50.869 [2024-11-18 06:50:43.924323] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:16:50.869 [2024-11-18 06:50:43.924329] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:16:50.869 [2024-11-18 06:50:43.924335] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:16:50.869 
[2024-11-18 06:50:43.924340] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:16:50.869 [2024-11-18 06:50:43.924346] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:16:50.869 [2024-11-18 06:50:43.924352] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:16:50.869 [2024-11-18 06:50:43.924359] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:16:50.869 [2024-11-18 06:50:43.924366] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:16:50.869 [2024-11-18 06:50:43.924373] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:16:50.869 [2024-11-18 06:50:43.924381] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:16:50.869 [2024-11-18 06:50:43.924387] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:16:50.869 [2024-11-18 06:50:43.924393] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:16:50.869 [2024-11-18 06:50:43.924399] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:16:50.869 [2024-11-18 06:50:43.924405] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:16:50.869 [2024-11-18 06:50:43.924411] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:16:50.869 [2024-11-18 06:50:43.924417] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:16:50.869 [2024-11-18 06:50:43.924423] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:16:50.869 [2024-11-18 06:50:43.924429] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:16:50.869 [2024-11-18 06:50:43.924435] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:16:50.869 [2024-11-18 06:50:43.924441] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:16:50.869 [2024-11-18 06:50:43.924447] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:16:50.869 [2024-11-18 06:50:43.924454] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:16:50.869 [2024-11-18 06:50:43.924459] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:16:50.869 [2024-11-18 06:50:43.924466] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:16:50.869 [2024-11-18 06:50:43.924474] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:16:50.869 [2024-11-18 06:50:43.924482] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:16:50.869 [2024-11-18 06:50:43.924489] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:16:50.869 [2024-11-18 06:50:43.924494] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:16:50.869 [2024-11-18 06:50:43.924501] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:50.869 [2024-11-18 06:50:43.924507] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:16:50.869 [2024-11-18 06:50:43.924514] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.511 ms 00:16:50.869 [2024-11-18 06:50:43.924520] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:50.869 [2024-11-18 06:50:43.932166] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:50.869 [2024-11-18 06:50:43.932193] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:16:50.869 [2024-11-18 06:50:43.932203] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.607 ms 00:16:50.869 [2024-11-18 06:50:43.932209] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:50.869 [2024-11-18 06:50:43.932286] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:50.869 [2024-11-18 06:50:43.932295] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:16:50.869 [2024-11-18 06:50:43.932304] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.046 ms 00:16:50.869 [2024-11-18 06:50:43.932309] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:51.129 [2024-11-18 06:50:43.952238] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:51.129 [2024-11-18 06:50:43.952302] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:16:51.129 [2024-11-18 06:50:43.952324] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.903 ms 00:16:51.129 [2024-11-18 06:50:43.952339] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:51.129 [2024-11-18 06:50:43.952466] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:51.129 [2024-11-18 06:50:43.952491] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:16:51.129 [2024-11-18 06:50:43.952508] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:16:51.129 [2024-11-18 06:50:43.952521] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:51.129 [2024-11-18 06:50:43.952903] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:51.129 [2024-11-18 06:50:43.952928] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:16:51.129 [2024-11-18 06:50:43.952946] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.344 ms 00:16:51.129 [2024-11-18 06:50:43.952962] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:51.129 [2024-11-18 06:50:43.953243] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:51.129 [2024-11-18 06:50:43.953278] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:16:51.129 [2024-11-18 06:50:43.953308] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.195 ms 00:16:51.129 [2024-11-18 06:50:43.953321] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:51.129 [2024-11-18 06:50:43.959818] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:51.129 [2024-11-18 06:50:43.959843] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:16:51.129 [2024-11-18 06:50:43.959850] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.462 ms 00:16:51.129 [2024-11-18 06:50:43.959856] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:51.129 [2024-11-18 06:50:43.961905] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:16:51.129 [2024-11-18 06:50:43.961940] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:16:51.129 [2024-11-18 06:50:43.961949] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:51.129 [2024-11-18 06:50:43.961955] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:16:51.129 [2024-11-18 06:50:43.961962] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.027 ms 00:16:51.129 [2024-11-18 06:50:43.961967] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:51.129 [2024-11-18 06:50:43.973463] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:51.129 [2024-11-18 06:50:43.973497] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:16:51.129 [2024-11-18 06:50:43.973507] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.447 ms 00:16:51.129 [2024-11-18 06:50:43.973513] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:51.129 [2024-11-18 06:50:43.975250] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:51.129 [2024-11-18 06:50:43.975278] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:16:51.129 [2024-11-18 06:50:43.975285] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.680 ms 00:16:51.129 [2024-11-18 06:50:43.975291] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:51.129 [2024-11-18 06:50:43.976674] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:51.129 [2024-11-18 06:50:43.976789] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:16:51.129 [2024-11-18 06:50:43.976806] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.354 ms 00:16:51.129 [2024-11-18 06:50:43.976811] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:51.129 [2024-11-18 06:50:43.977066] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:51.129 [2024-11-18 06:50:43.977076] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:16:51.129 [2024-11-18 06:50:43.977086] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.210 ms 00:16:51.129 [2024-11-18 06:50:43.977091] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:51.129 [2024-11-18 06:50:43.991174] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:51.129 [2024-11-18 06:50:43.991305] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:16:51.129 [2024-11-18 06:50:43.991318] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 
14.066 ms 00:16:51.129 [2024-11-18 06:50:43.991325] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:51.129 [2024-11-18 06:50:43.997081] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:16:51.129 [2024-11-18 06:50:44.008709] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:51.129 [2024-11-18 06:50:44.008740] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:16:51.129 [2024-11-18 06:50:44.008753] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.342 ms 00:16:51.129 [2024-11-18 06:50:44.008761] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:51.129 [2024-11-18 06:50:44.008841] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:51.129 [2024-11-18 06:50:44.008849] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:16:51.129 [2024-11-18 06:50:44.008856] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:16:51.129 [2024-11-18 06:50:44.008861] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:51.129 [2024-11-18 06:50:44.008897] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:51.129 [2024-11-18 06:50:44.008903] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:16:51.130 [2024-11-18 06:50:44.008914] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.022 ms 00:16:51.130 [2024-11-18 06:50:44.008920] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:51.130 [2024-11-18 06:50:44.008938] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:51.130 [2024-11-18 06:50:44.008945] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:16:51.130 [2024-11-18 06:50:44.008951] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:16:51.130 [2024-11-18 06:50:44.008956] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:51.130 [2024-11-18 06:50:44.008996] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:16:51.130 [2024-11-18 06:50:44.009006] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:51.130 [2024-11-18 06:50:44.009011] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:16:51.130 [2024-11-18 06:50:44.009017] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:16:51.130 [2024-11-18 06:50:44.009022] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:51.130 [2024-11-18 06:50:44.012724] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:51.130 [2024-11-18 06:50:44.012753] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:16:51.130 [2024-11-18 06:50:44.012761] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.686 ms 00:16:51.130 [2024-11-18 06:50:44.012767] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:51.130 [2024-11-18 06:50:44.012834] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:51.130 [2024-11-18 06:50:44.012844] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:16:51.130 [2024-11-18 06:50:44.012851] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:16:51.130 [2024-11-18 06:50:44.012857] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:51.130 
[2024-11-18 06:50:44.013467] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:16:51.130 [2024-11-18 06:50:44.014235] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 100.535 ms, result 0 00:16:51.130 [2024-11-18 06:50:44.015230] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:16:51.130 [2024-11-18 06:50:44.025089] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:16:52.074  [2024-11-18T06:50:46.101Z] Copying: 21/256 [MB] (21 MBps) [2024-11-18T06:50:47.045Z] Copying: 37/256 [MB] (16 MBps) [2024-11-18T06:50:48.432Z] Copying: 54/256 [MB] (16 MBps) [2024-11-18T06:50:49.376Z] Copying: 70/256 [MB] (16 MBps) [2024-11-18T06:50:50.319Z] Copying: 105/256 [MB] (34 MBps) [2024-11-18T06:50:51.261Z] Copying: 133/256 [MB] (28 MBps) [2024-11-18T06:50:52.205Z] Copying: 156/256 [MB] (23 MBps) [2024-11-18T06:50:53.150Z] Copying: 166/256 [MB] (10 MBps) [2024-11-18T06:50:54.093Z] Copying: 184/256 [MB] (17 MBps) [2024-11-18T06:50:55.038Z] Copying: 214/256 [MB] (30 MBps) [2024-11-18T06:50:55.038Z] Copying: 256/256 [MB] (average 23 MBps)[2024-11-18 06:50:54.879503] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:17:01.951 [2024-11-18 06:50:54.880484] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:01.951 [2024-11-18 06:50:54.880502] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:17:01.951 [2024-11-18 06:50:54.880515] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:17:01.951 [2024-11-18 06:50:54.880522] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:01.951 [2024-11-18 06:50:54.880538] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:17:01.951 [2024-11-18 06:50:54.880898] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:01.951 [2024-11-18 06:50:54.880928] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:17:01.951 [2024-11-18 06:50:54.880942] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.351 ms 00:17:01.951 [2024-11-18 06:50:54.880948] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:01.951 [2024-11-18 06:50:54.882542] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:01.951 [2024-11-18 06:50:54.882570] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:17:01.951 [2024-11-18 06:50:54.882578] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.576 ms 00:17:01.951 [2024-11-18 06:50:54.882585] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:01.951 [2024-11-18 06:50:54.888417] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:01.951 [2024-11-18 06:50:54.888443] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:17:01.951 [2024-11-18 06:50:54.888451] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.800 ms 00:17:01.951 [2024-11-18 06:50:54.888457] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:01.951 [2024-11-18 06:50:54.893944] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:01.951 [2024-11-18 06:50:54.893967] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Finish L2P trims 00:17:01.951 [2024-11-18 06:50:54.893986] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.452 ms 00:17:01.951 [2024-11-18 06:50:54.893992] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:01.951 [2024-11-18 06:50:54.895230] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:01.951 [2024-11-18 06:50:54.895334] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:17:01.951 [2024-11-18 06:50:54.895345] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.196 ms 00:17:01.951 [2024-11-18 06:50:54.895351] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:01.951 [2024-11-18 06:50:54.898869] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:01.951 [2024-11-18 06:50:54.898971] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:17:01.951 [2024-11-18 06:50:54.898994] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.494 ms 00:17:01.951 [2024-11-18 06:50:54.899000] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:01.951 [2024-11-18 06:50:54.899098] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:01.951 [2024-11-18 06:50:54.899105] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:17:01.951 [2024-11-18 06:50:54.899112] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.065 ms 00:17:01.951 [2024-11-18 06:50:54.899117] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:01.951 [2024-11-18 06:50:54.901035] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:01.951 [2024-11-18 06:50:54.901060] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:17:01.951 [2024-11-18 06:50:54.901067] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.903 ms 00:17:01.951 [2024-11-18 06:50:54.901073] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:01.951 [2024-11-18 06:50:54.902070] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:01.951 [2024-11-18 06:50:54.902094] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:17:01.951 [2024-11-18 06:50:54.902101] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.972 ms 00:17:01.951 [2024-11-18 06:50:54.902106] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:01.951 [2024-11-18 06:50:54.903161] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:01.951 [2024-11-18 06:50:54.903186] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:17:01.951 [2024-11-18 06:50:54.903193] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.031 ms 00:17:01.951 [2024-11-18 06:50:54.903198] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:01.951 [2024-11-18 06:50:54.904038] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:01.951 [2024-11-18 06:50:54.904062] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:17:01.951 [2024-11-18 06:50:54.904069] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.797 ms 00:17:01.951 [2024-11-18 06:50:54.904074] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:01.951 [2024-11-18 06:50:54.904097] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 
00:17:01.951 [2024-11-18 06:50:54.904107] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:17:01.951 [2024-11-18 06:50:54.904120] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:17:01.951 [2024-11-18 06:50:54.904126] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:17:01.951 [2024-11-18 06:50:54.904131] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:17:01.951 [2024-11-18 06:50:54.904137] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:17:01.951 [2024-11-18 06:50:54.904143] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:17:01.951 [2024-11-18 06:50:54.904149] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:17:01.951 [2024-11-18 06:50:54.904154] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:17:01.951 [2024-11-18 06:50:54.904160] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:17:01.951 [2024-11-18 06:50:54.904166] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:17:01.951 [2024-11-18 06:50:54.904171] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:17:01.951 [2024-11-18 06:50:54.904177] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:17:01.951 [2024-11-18 06:50:54.904182] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:17:01.951 [2024-11-18 06:50:54.904188] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:17:01.951 [2024-11-18 06:50:54.904194] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:17:01.951 [2024-11-18 06:50:54.904199] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:17:01.951 [2024-11-18 06:50:54.904205] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:17:01.951 [2024-11-18 06:50:54.904211] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:17:01.951 [2024-11-18 06:50:54.904216] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:17:01.951 [2024-11-18 06:50:54.904222] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:17:01.951 [2024-11-18 06:50:54.904227] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:17:01.951 [2024-11-18 06:50:54.904233] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:17:01.951 [2024-11-18 06:50:54.904238] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:17:01.951 [2024-11-18 06:50:54.904244] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:17:01.951 [2024-11-18 06:50:54.904249] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 
state: free 00:17:01.951 [2024-11-18 06:50:54.904255] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:17:01.951 [2024-11-18 06:50:54.904261] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:17:01.951 [2024-11-18 06:50:54.904267] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:17:01.951 [2024-11-18 06:50:54.904272] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:17:01.952 [2024-11-18 06:50:54.904278] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:17:01.952 [2024-11-18 06:50:54.904284] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:17:01.952 [2024-11-18 06:50:54.904289] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:17:01.952 [2024-11-18 06:50:54.904296] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:17:01.952 [2024-11-18 06:50:54.904302] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:17:01.952 [2024-11-18 06:50:54.904308] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:17:01.952 [2024-11-18 06:50:54.904313] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:17:01.952 [2024-11-18 06:50:54.904319] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:17:01.952 [2024-11-18 06:50:54.904324] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:17:01.952 [2024-11-18 06:50:54.904330] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:17:01.952 [2024-11-18 06:50:54.904335] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:17:01.952 [2024-11-18 06:50:54.904341] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:17:01.952 [2024-11-18 06:50:54.904347] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:17:01.952 [2024-11-18 06:50:54.904352] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:17:01.952 [2024-11-18 06:50:54.904358] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:17:01.952 [2024-11-18 06:50:54.904364] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:17:01.952 [2024-11-18 06:50:54.904369] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:17:01.952 [2024-11-18 06:50:54.904375] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:17:01.952 [2024-11-18 06:50:54.904380] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:17:01.952 [2024-11-18 06:50:54.904386] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:17:01.952 [2024-11-18 06:50:54.904391] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 
0 / 261120 wr_cnt: 0 state: free 00:17:01.952 [2024-11-18 06:50:54.904397] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:17:01.952 [2024-11-18 06:50:54.904402] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:17:01.952 [2024-11-18 06:50:54.904408] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:17:01.952 [2024-11-18 06:50:54.904413] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:17:01.952 [2024-11-18 06:50:54.904419] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:17:01.952 [2024-11-18 06:50:54.904425] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:17:01.952 [2024-11-18 06:50:54.904431] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:17:01.952 [2024-11-18 06:50:54.904436] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:17:01.952 [2024-11-18 06:50:54.904442] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:17:01.952 [2024-11-18 06:50:54.904447] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:17:01.952 [2024-11-18 06:50:54.904453] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:17:01.952 [2024-11-18 06:50:54.904458] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:17:01.952 [2024-11-18 06:50:54.904464] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:17:01.952 [2024-11-18 06:50:54.904469] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:17:01.952 [2024-11-18 06:50:54.904476] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:17:01.952 [2024-11-18 06:50:54.904482] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:17:01.952 [2024-11-18 06:50:54.904488] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:17:01.952 [2024-11-18 06:50:54.904494] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:17:01.952 [2024-11-18 06:50:54.904499] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:17:01.952 [2024-11-18 06:50:54.904504] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:17:01.952 [2024-11-18 06:50:54.904510] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:17:01.952 [2024-11-18 06:50:54.904516] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:17:01.952 [2024-11-18 06:50:54.904522] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:17:01.952 [2024-11-18 06:50:54.904527] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:17:01.952 [2024-11-18 06:50:54.904533] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:17:01.952 [2024-11-18 06:50:54.904538] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:17:01.952 [2024-11-18 06:50:54.904544] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:17:01.952 [2024-11-18 06:50:54.904549] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:17:01.952 [2024-11-18 06:50:54.904555] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:17:01.952 [2024-11-18 06:50:54.904560] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:17:01.952 [2024-11-18 06:50:54.904566] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:17:01.952 [2024-11-18 06:50:54.904572] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:17:01.952 [2024-11-18 06:50:54.904577] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:17:01.952 [2024-11-18 06:50:54.904583] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:17:01.952 [2024-11-18 06:50:54.904589] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:17:01.952 [2024-11-18 06:50:54.904595] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:17:01.952 [2024-11-18 06:50:54.904600] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:17:01.952 [2024-11-18 06:50:54.904606] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:17:01.952 [2024-11-18 06:50:54.904612] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:17:01.952 [2024-11-18 06:50:54.904617] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:17:01.952 [2024-11-18 06:50:54.904623] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:17:01.952 [2024-11-18 06:50:54.904629] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:17:01.952 [2024-11-18 06:50:54.904634] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:17:01.952 [2024-11-18 06:50:54.904639] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:17:01.952 [2024-11-18 06:50:54.904645] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:17:01.952 [2024-11-18 06:50:54.904650] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:17:01.952 [2024-11-18 06:50:54.904656] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:17:01.952 [2024-11-18 06:50:54.904662] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:17:01.952 [2024-11-18 06:50:54.904668] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:17:01.952 [2024-11-18 06:50:54.904680] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:17:01.952 [2024-11-18 06:50:54.904692] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:17:01.952 [2024-11-18 06:50:54.904698] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: ec0999a9-9a7d-450e-b3a6-a004ddc4ed37 00:17:01.952 [2024-11-18 06:50:54.904704] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:17:01.952 [2024-11-18 06:50:54.904709] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:17:01.952 [2024-11-18 06:50:54.904715] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:17:01.952 [2024-11-18 06:50:54.904721] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:17:01.952 [2024-11-18 06:50:54.904727] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:17:01.952 [2024-11-18 06:50:54.904733] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:17:01.952 [2024-11-18 06:50:54.904738] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:17:01.952 [2024-11-18 06:50:54.904743] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:17:01.952 [2024-11-18 06:50:54.904748] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:17:01.952 [2024-11-18 06:50:54.904754] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:01.952 [2024-11-18 06:50:54.904761] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:17:01.952 [2024-11-18 06:50:54.904768] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.658 ms 00:17:01.952 [2024-11-18 06:50:54.904773] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:01.952 [2024-11-18 06:50:54.906005] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:01.952 [2024-11-18 06:50:54.906022] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:17:01.952 [2024-11-18 06:50:54.906029] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.220 ms 00:17:01.952 [2024-11-18 06:50:54.906035] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:01.952 [2024-11-18 06:50:54.906105] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:01.952 [2024-11-18 06:50:54.906111] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:17:01.952 [2024-11-18 06:50:54.906118] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.056 ms 00:17:01.953 [2024-11-18 06:50:54.906123] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:01.953 [2024-11-18 06:50:54.910498] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:01.953 [2024-11-18 06:50:54.910523] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:01.953 [2024-11-18 06:50:54.910530] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:01.953 [2024-11-18 06:50:54.910537] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:01.953 [2024-11-18 06:50:54.910615] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:01.953 [2024-11-18 06:50:54.910623] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:01.953 [2024-11-18 06:50:54.910629] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:01.953 [2024-11-18 06:50:54.910634] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:01.953 [2024-11-18 06:50:54.910663] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:01.953 [2024-11-18 06:50:54.910669] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:01.953 [2024-11-18 06:50:54.910675] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:01.953 [2024-11-18 06:50:54.910680] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:01.953 [2024-11-18 06:50:54.910693] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:01.953 [2024-11-18 06:50:54.910701] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:01.953 [2024-11-18 06:50:54.910709] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:01.953 [2024-11-18 06:50:54.910715] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:01.953 [2024-11-18 06:50:54.918290] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:01.953 [2024-11-18 06:50:54.918324] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:01.953 [2024-11-18 06:50:54.918337] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:01.953 [2024-11-18 06:50:54.918343] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:01.953 [2024-11-18 06:50:54.924282] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:01.953 [2024-11-18 06:50:54.924418] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:01.953 [2024-11-18 06:50:54.924430] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:01.953 [2024-11-18 06:50:54.924437] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:01.953 [2024-11-18 06:50:54.924471] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:01.953 [2024-11-18 06:50:54.924479] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:01.953 [2024-11-18 06:50:54.924485] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:01.953 [2024-11-18 06:50:54.924491] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:01.953 [2024-11-18 06:50:54.924512] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:01.953 [2024-11-18 06:50:54.924519] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:01.953 [2024-11-18 06:50:54.924528] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:01.953 [2024-11-18 06:50:54.924534] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:01.953 [2024-11-18 06:50:54.924588] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:01.953 [2024-11-18 06:50:54.924595] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:01.953 [2024-11-18 06:50:54.924602] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:01.953 [2024-11-18 06:50:54.924608] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:01.953 [2024-11-18 06:50:54.924629] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:01.953 [2024-11-18 06:50:54.924636] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:17:01.953 [2024-11-18 06:50:54.924645] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.000 ms 00:17:01.953 [2024-11-18 06:50:54.924652] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:01.953 [2024-11-18 06:50:54.924680] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:01.953 [2024-11-18 06:50:54.924687] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:01.953 [2024-11-18 06:50:54.924694] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:01.953 [2024-11-18 06:50:54.924699] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:01.953 [2024-11-18 06:50:54.924731] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:01.953 [2024-11-18 06:50:54.924739] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:01.953 [2024-11-18 06:50:54.924747] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:01.953 [2024-11-18 06:50:54.924752] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:01.953 [2024-11-18 06:50:54.924852] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 44.349 ms, result 0 00:17:02.526 00:17:02.526 00:17:02.526 06:50:55 ftl.ftl_trim -- ftl/trim.sh@72 -- # svcpid=85274 00:17:02.526 06:50:55 ftl.ftl_trim -- ftl/trim.sh@73 -- # waitforlisten 85274 00:17:02.526 06:50:55 ftl.ftl_trim -- ftl/trim.sh@71 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init 00:17:02.526 06:50:55 ftl.ftl_trim -- common/autotest_common.sh@835 -- # '[' -z 85274 ']' 00:17:02.526 06:50:55 ftl.ftl_trim -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:02.526 06:50:55 ftl.ftl_trim -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:02.526 06:50:55 ftl.ftl_trim -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:02.526 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:02.526 06:50:55 ftl.ftl_trim -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:02.526 06:50:55 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:17:02.526 [2024-11-18 06:50:55.428722] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 
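(The xtrace above shows trim.sh@71-73 launching spdk_tgt with FTL init logging enabled via -L ftl_init, recording its pid as svcpid=85274, and then blocking in waitforlisten until the target answers on /var/tmp/spdk.sock. A minimal sketch of that launch-and-wait pattern, for reference only: this is not the test's own waitforlisten helper, and using rpc_get_methods as the readiness probe is an assumption — the paths and the retry count of 100 are taken from the trace.)

  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init &
  svcpid=$!                                  # pid 85274 in this run
  rpc_addr=/var/tmp/spdk.sock                # UNIX socket named in the trace
  max_retries=100                            # matches the log's max_retries=100
  for ((i = 0; i < max_retries; i++)); do
      kill -0 "$svcpid" || exit 1            # give up if the target process died
      # assumption: probe readiness with any cheap RPC, e.g. rpc_get_methods
      if /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$rpc_addr" rpc_get_methods &> /dev/null; then
          break                              # socket is listening; later RPCs (load_config, bdev_ftl_unmap) can proceed
      fi
      sleep 0.5
  done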
00:17:02.526 [2024-11-18 06:50:55.428816] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85274 ] 00:17:02.526 [2024-11-18 06:50:55.576313] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:02.526 [2024-11-18 06:50:55.592919] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:03.469 06:50:56 ftl.ftl_trim -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:03.469 06:50:56 ftl.ftl_trim -- common/autotest_common.sh@868 -- # return 0 00:17:03.469 06:50:56 ftl.ftl_trim -- ftl/trim.sh@75 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config 00:17:03.469 [2024-11-18 06:50:56.478370] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:03.469 [2024-11-18 06:50:56.478422] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:03.731 [2024-11-18 06:50:56.644006] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:03.731 [2024-11-18 06:50:56.644041] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:17:03.731 [2024-11-18 06:50:56.644051] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:17:03.731 [2024-11-18 06:50:56.644059] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:03.731 [2024-11-18 06:50:56.645993] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:03.731 [2024-11-18 06:50:56.646040] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:03.731 [2024-11-18 06:50:56.646049] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.915 ms 00:17:03.731 [2024-11-18 06:50:56.646056] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:03.731 [2024-11-18 06:50:56.646123] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:17:03.731 [2024-11-18 06:50:56.646307] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:17:03.731 [2024-11-18 06:50:56.646318] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:03.731 [2024-11-18 06:50:56.646326] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:03.731 [2024-11-18 06:50:56.646334] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.205 ms 00:17:03.731 [2024-11-18 06:50:56.646341] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:03.731 [2024-11-18 06:50:56.647313] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:17:03.731 [2024-11-18 06:50:56.649164] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:03.731 [2024-11-18 06:50:56.649282] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:17:03.731 [2024-11-18 06:50:56.649297] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.849 ms 00:17:03.731 [2024-11-18 06:50:56.649303] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:03.731 [2024-11-18 06:50:56.649346] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:03.732 [2024-11-18 06:50:56.649354] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:17:03.732 [2024-11-18 06:50:56.649363] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:17:03.732 [2024-11-18 06:50:56.649368] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:03.732 [2024-11-18 06:50:56.653686] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:03.732 [2024-11-18 06:50:56.653711] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:03.732 [2024-11-18 06:50:56.653720] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.275 ms 00:17:03.732 [2024-11-18 06:50:56.653726] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:03.732 [2024-11-18 06:50:56.653802] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:03.732 [2024-11-18 06:50:56.653810] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:03.732 [2024-11-18 06:50:56.653818] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:17:03.732 [2024-11-18 06:50:56.653825] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:03.732 [2024-11-18 06:50:56.653845] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:03.732 [2024-11-18 06:50:56.653851] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:17:03.732 [2024-11-18 06:50:56.653860] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:17:03.732 [2024-11-18 06:50:56.653865] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:03.732 [2024-11-18 06:50:56.653885] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:17:03.732 [2024-11-18 06:50:56.655072] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:03.732 [2024-11-18 06:50:56.655101] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:03.732 [2024-11-18 06:50:56.655108] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.194 ms 00:17:03.732 [2024-11-18 06:50:56.655117] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:03.732 [2024-11-18 06:50:56.655148] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:03.732 [2024-11-18 06:50:56.655157] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:17:03.732 [2024-11-18 06:50:56.655165] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:17:03.732 [2024-11-18 06:50:56.655172] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:03.732 [2024-11-18 06:50:56.655186] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:17:03.732 [2024-11-18 06:50:56.655204] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:17:03.732 [2024-11-18 06:50:56.655235] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:17:03.732 [2024-11-18 06:50:56.655251] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:17:03.732 [2024-11-18 06:50:56.655331] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:17:03.732 [2024-11-18 06:50:56.655340] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:17:03.732 [2024-11-18 06:50:56.655348] upgrade/ftl_sb_v5.c: 
109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:17:03.732 [2024-11-18 06:50:56.655357] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:17:03.732 [2024-11-18 06:50:56.655364] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:17:03.732 [2024-11-18 06:50:56.655374] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:17:03.732 [2024-11-18 06:50:56.655379] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:17:03.732 [2024-11-18 06:50:56.655386] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:17:03.732 [2024-11-18 06:50:56.655391] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:17:03.732 [2024-11-18 06:50:56.655400] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:03.732 [2024-11-18 06:50:56.655405] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:17:03.732 [2024-11-18 06:50:56.655412] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.213 ms 00:17:03.732 [2024-11-18 06:50:56.655418] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:03.732 [2024-11-18 06:50:56.655490] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:03.732 [2024-11-18 06:50:56.655496] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:17:03.732 [2024-11-18 06:50:56.655503] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.058 ms 00:17:03.732 [2024-11-18 06:50:56.655508] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:03.732 [2024-11-18 06:50:56.655588] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:17:03.732 [2024-11-18 06:50:56.655596] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:17:03.732 [2024-11-18 06:50:56.655603] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:03.732 [2024-11-18 06:50:56.655608] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:03.732 [2024-11-18 06:50:56.655617] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:17:03.732 [2024-11-18 06:50:56.655623] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:17:03.732 [2024-11-18 06:50:56.655629] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:17:03.732 [2024-11-18 06:50:56.655639] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:17:03.732 [2024-11-18 06:50:56.655646] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:17:03.732 [2024-11-18 06:50:56.655652] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:03.732 [2024-11-18 06:50:56.655658] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:17:03.732 [2024-11-18 06:50:56.655663] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:17:03.732 [2024-11-18 06:50:56.655669] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:03.732 [2024-11-18 06:50:56.655674] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:17:03.732 [2024-11-18 06:50:56.655682] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:17:03.732 [2024-11-18 06:50:56.655687] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:03.732 
[2024-11-18 06:50:56.655693] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:17:03.732 [2024-11-18 06:50:56.655698] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:17:03.732 [2024-11-18 06:50:56.655704] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:03.732 [2024-11-18 06:50:56.655709] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:17:03.732 [2024-11-18 06:50:56.655717] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:17:03.732 [2024-11-18 06:50:56.655722] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:03.732 [2024-11-18 06:50:56.655729] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:17:03.732 [2024-11-18 06:50:56.655735] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:17:03.732 [2024-11-18 06:50:56.655742] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:03.732 [2024-11-18 06:50:56.655748] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:17:03.732 [2024-11-18 06:50:56.655755] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:17:03.732 [2024-11-18 06:50:56.655761] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:03.732 [2024-11-18 06:50:56.655768] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:17:03.732 [2024-11-18 06:50:56.655774] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:17:03.732 [2024-11-18 06:50:56.655782] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:03.732 [2024-11-18 06:50:56.655787] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:17:03.732 [2024-11-18 06:50:56.655794] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:17:03.732 [2024-11-18 06:50:56.655800] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:03.732 [2024-11-18 06:50:56.655806] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:17:03.732 [2024-11-18 06:50:56.655812] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:17:03.732 [2024-11-18 06:50:56.655820] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:03.732 [2024-11-18 06:50:56.655826] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:17:03.732 [2024-11-18 06:50:56.655833] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:17:03.732 [2024-11-18 06:50:56.655838] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:03.732 [2024-11-18 06:50:56.655846] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:17:03.732 [2024-11-18 06:50:56.655851] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:17:03.732 [2024-11-18 06:50:56.655859] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:03.732 [2024-11-18 06:50:56.655864] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:17:03.732 [2024-11-18 06:50:56.655872] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:17:03.732 [2024-11-18 06:50:56.655878] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:03.732 [2024-11-18 06:50:56.655886] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:03.732 [2024-11-18 06:50:56.655893] ftl_layout.c: 130:dump_region: 
*NOTICE*: [FTL][ftl0] Region vmap 00:17:03.732 [2024-11-18 06:50:56.655900] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:17:03.732 [2024-11-18 06:50:56.655906] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:17:03.732 [2024-11-18 06:50:56.655913] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:17:03.732 [2024-11-18 06:50:56.655918] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:17:03.732 [2024-11-18 06:50:56.655926] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:17:03.732 [2024-11-18 06:50:56.655933] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:17:03.732 [2024-11-18 06:50:56.655942] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:03.732 [2024-11-18 06:50:56.655948] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:17:03.733 [2024-11-18 06:50:56.655957] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:17:03.733 [2024-11-18 06:50:56.655963] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:17:03.733 [2024-11-18 06:50:56.655971] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:17:03.733 [2024-11-18 06:50:56.655987] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:17:03.733 [2024-11-18 06:50:56.655994] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:17:03.733 [2024-11-18 06:50:56.656001] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:17:03.733 [2024-11-18 06:50:56.656008] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:17:03.733 [2024-11-18 06:50:56.656014] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:17:03.733 [2024-11-18 06:50:56.656021] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:17:03.733 [2024-11-18 06:50:56.656028] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:17:03.733 [2024-11-18 06:50:56.656035] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:17:03.733 [2024-11-18 06:50:56.656041] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:17:03.733 [2024-11-18 06:50:56.656050] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:17:03.733 [2024-11-18 06:50:56.656056] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:17:03.733 [2024-11-18 
06:50:56.656065] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:03.733 [2024-11-18 06:50:56.656073] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:17:03.733 [2024-11-18 06:50:56.656080] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:17:03.733 [2024-11-18 06:50:56.656086] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:17:03.733 [2024-11-18 06:50:56.656094] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:17:03.733 [2024-11-18 06:50:56.656100] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:03.733 [2024-11-18 06:50:56.656108] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:17:03.733 [2024-11-18 06:50:56.656116] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.569 ms 00:17:03.733 [2024-11-18 06:50:56.656123] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:03.733 [2024-11-18 06:50:56.663873] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:03.733 [2024-11-18 06:50:56.664012] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:03.733 [2024-11-18 06:50:56.664025] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.707 ms 00:17:03.733 [2024-11-18 06:50:56.664036] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:03.733 [2024-11-18 06:50:56.664128] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:03.733 [2024-11-18 06:50:56.664144] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:17:03.733 [2024-11-18 06:50:56.664151] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.050 ms 00:17:03.733 [2024-11-18 06:50:56.664158] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:03.733 [2024-11-18 06:50:56.671469] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:03.733 [2024-11-18 06:50:56.671580] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:03.733 [2024-11-18 06:50:56.671591] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.294 ms 00:17:03.733 [2024-11-18 06:50:56.671599] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:03.733 [2024-11-18 06:50:56.671642] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:03.733 [2024-11-18 06:50:56.671651] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:03.733 [2024-11-18 06:50:56.671658] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:17:03.733 [2024-11-18 06:50:56.671665] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:03.733 [2024-11-18 06:50:56.671945] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:03.733 [2024-11-18 06:50:56.671966] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:03.733 [2024-11-18 06:50:56.671983] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.265 ms 00:17:03.733 [2024-11-18 06:50:56.671991] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 
status: 0 00:17:03.733 [2024-11-18 06:50:56.672092] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:03.733 [2024-11-18 06:50:56.672107] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:03.733 [2024-11-18 06:50:56.672114] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.083 ms 00:17:03.733 [2024-11-18 06:50:56.672121] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:03.733 [2024-11-18 06:50:56.676732] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:03.733 [2024-11-18 06:50:56.676760] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:03.733 [2024-11-18 06:50:56.676768] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.595 ms 00:17:03.733 [2024-11-18 06:50:56.676775] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:03.733 [2024-11-18 06:50:56.678815] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:17:03.733 [2024-11-18 06:50:56.678925] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:17:03.733 [2024-11-18 06:50:56.678937] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:03.733 [2024-11-18 06:50:56.678945] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:17:03.733 [2024-11-18 06:50:56.678951] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.092 ms 00:17:03.733 [2024-11-18 06:50:56.678958] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:03.733 [2024-11-18 06:50:56.690032] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:03.733 [2024-11-18 06:50:56.690061] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:17:03.733 [2024-11-18 06:50:56.690070] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.037 ms 00:17:03.733 [2024-11-18 06:50:56.690082] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:03.733 [2024-11-18 06:50:56.691513] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:03.733 [2024-11-18 06:50:56.691541] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:17:03.733 [2024-11-18 06:50:56.691548] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.380 ms 00:17:03.733 [2024-11-18 06:50:56.691555] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:03.733 [2024-11-18 06:50:56.692705] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:03.733 [2024-11-18 06:50:56.692731] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:17:03.733 [2024-11-18 06:50:56.692738] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.122 ms 00:17:03.733 [2024-11-18 06:50:56.692745] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:03.733 [2024-11-18 06:50:56.693003] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:03.733 [2024-11-18 06:50:56.693019] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:17:03.733 [2024-11-18 06:50:56.693026] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.209 ms 00:17:03.733 [2024-11-18 06:50:56.693033] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:03.733 [2024-11-18 06:50:56.719330] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:03.733 [2024-11-18 06:50:56.719484] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:17:03.733 [2024-11-18 06:50:56.719503] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.279 ms 00:17:03.733 [2024-11-18 06:50:56.719515] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:03.733 [2024-11-18 06:50:56.726843] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:17:03.733 [2024-11-18 06:50:56.738031] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:03.733 [2024-11-18 06:50:56.738057] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:17:03.733 [2024-11-18 06:50:56.738068] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.448 ms 00:17:03.733 [2024-11-18 06:50:56.738074] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:03.733 [2024-11-18 06:50:56.738143] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:03.733 [2024-11-18 06:50:56.738154] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:17:03.733 [2024-11-18 06:50:56.738163] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:17:03.733 [2024-11-18 06:50:56.738169] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:03.733 [2024-11-18 06:50:56.738208] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:03.733 [2024-11-18 06:50:56.738216] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:17:03.733 [2024-11-18 06:50:56.738223] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.023 ms 00:17:03.733 [2024-11-18 06:50:56.738229] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:03.733 [2024-11-18 06:50:56.738249] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:03.733 [2024-11-18 06:50:56.738256] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:17:03.733 [2024-11-18 06:50:56.738265] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:17:03.733 [2024-11-18 06:50:56.738272] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:03.733 [2024-11-18 06:50:56.738296] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:17:03.733 [2024-11-18 06:50:56.738303] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:03.733 [2024-11-18 06:50:56.738310] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:17:03.733 [2024-11-18 06:50:56.738316] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:17:03.733 [2024-11-18 06:50:56.738322] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:03.733 [2024-11-18 06:50:56.741449] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:03.733 [2024-11-18 06:50:56.741565] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:17:03.733 [2024-11-18 06:50:56.741579] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.110 ms 00:17:03.733 [2024-11-18 06:50:56.741589] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:03.733 [2024-11-18 06:50:56.741647] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:03.734 [2024-11-18 06:50:56.741657] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:17:03.734 [2024-11-18 06:50:56.741666] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:17:03.734 [2024-11-18 06:50:56.741673] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:03.734 [2024-11-18 06:50:56.742328] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:17:03.734 [2024-11-18 06:50:56.743115] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 98.138 ms, result 0 00:17:03.734 [2024-11-18 06:50:56.744074] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:17:03.734 Some configs were skipped because the RPC state that can call them passed over. 00:17:03.734 06:50:56 ftl.ftl_trim -- ftl/trim.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024 00:17:03.995 [2024-11-18 06:50:56.965019] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:03.995 [2024-11-18 06:50:56.965127] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:17:03.995 [2024-11-18 06:50:56.965177] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.495 ms 00:17:03.995 [2024-11-18 06:50:56.965196] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:03.995 [2024-11-18 06:50:56.965236] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 1.711 ms, result 0 00:17:03.995 true 00:17:03.995 06:50:56 ftl.ftl_trim -- ftl/trim.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024 00:17:04.257 [2024-11-18 06:50:57.168930] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:04.258 [2024-11-18 06:50:57.169055] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:17:04.258 [2024-11-18 06:50:57.169098] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.214 ms 00:17:04.258 [2024-11-18 06:50:57.169117] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:04.258 [2024-11-18 06:50:57.169157] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 1.439 ms, result 0 00:17:04.258 true 00:17:04.258 06:50:57 ftl.ftl_trim -- ftl/trim.sh@81 -- # killprocess 85274 00:17:04.258 06:50:57 ftl.ftl_trim -- common/autotest_common.sh@954 -- # '[' -z 85274 ']' 00:17:04.258 06:50:57 ftl.ftl_trim -- common/autotest_common.sh@958 -- # kill -0 85274 00:17:04.258 06:50:57 ftl.ftl_trim -- common/autotest_common.sh@959 -- # uname 00:17:04.258 06:50:57 ftl.ftl_trim -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:04.258 06:50:57 ftl.ftl_trim -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 85274 00:17:04.258 killing process with pid 85274 00:17:04.258 06:50:57 ftl.ftl_trim -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:04.258 06:50:57 ftl.ftl_trim -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:04.258 06:50:57 ftl.ftl_trim -- common/autotest_common.sh@972 -- # echo 'killing process with pid 85274' 00:17:04.258 06:50:57 ftl.ftl_trim -- common/autotest_common.sh@973 -- # kill 85274 00:17:04.258 06:50:57 ftl.ftl_trim -- common/autotest_common.sh@978 -- # wait 85274 00:17:04.258 [2024-11-18 06:50:57.300291] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:04.258 [2024-11-18 06:50:57.300464] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:17:04.258 [2024-11-18 06:50:57.300512] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:17:04.258 [2024-11-18 06:50:57.300522] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:04.258 [2024-11-18 06:50:57.300551] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:17:04.258 [2024-11-18 06:50:57.300942] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:04.258 [2024-11-18 06:50:57.300958] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:17:04.258 [2024-11-18 06:50:57.300967] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.376 ms 00:17:04.258 [2024-11-18 06:50:57.300984] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:04.258 [2024-11-18 06:50:57.301195] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:04.258 [2024-11-18 06:50:57.301204] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:17:04.258 [2024-11-18 06:50:57.301211] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.194 ms 00:17:04.258 [2024-11-18 06:50:57.301218] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:04.258 [2024-11-18 06:50:57.304913] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:04.258 [2024-11-18 06:50:57.304945] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:17:04.258 [2024-11-18 06:50:57.304952] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.681 ms 00:17:04.258 [2024-11-18 06:50:57.304960] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:04.258 [2024-11-18 06:50:57.310147] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:04.258 [2024-11-18 06:50:57.310172] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:17:04.258 [2024-11-18 06:50:57.310180] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.152 ms 00:17:04.258 [2024-11-18 06:50:57.310189] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:04.258 [2024-11-18 06:50:57.312115] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:04.258 [2024-11-18 06:50:57.312145] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:17:04.258 [2024-11-18 06:50:57.312151] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.879 ms 00:17:04.258 [2024-11-18 06:50:57.312158] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:04.258 [2024-11-18 06:50:57.315951] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:04.258 [2024-11-18 06:50:57.316061] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:17:04.258 [2024-11-18 06:50:57.316073] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.767 ms 00:17:04.258 [2024-11-18 06:50:57.316082] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:04.258 [2024-11-18 06:50:57.316176] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:04.258 [2024-11-18 06:50:57.316185] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:17:04.258 [2024-11-18 06:50:57.316191] mngt/ftl_mngt.c: 430:trace_step: 
*NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:17:04.258 [2024-11-18 06:50:57.316201] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:04.258 [2024-11-18 06:50:57.318377] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:04.258 [2024-11-18 06:50:57.318406] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:17:04.258 [2024-11-18 06:50:57.318412] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.162 ms 00:17:04.258 [2024-11-18 06:50:57.318422] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:04.258 [2024-11-18 06:50:57.320311] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:04.258 [2024-11-18 06:50:57.320338] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:17:04.258 [2024-11-18 06:50:57.320345] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.856 ms 00:17:04.258 [2024-11-18 06:50:57.320351] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:04.258 [2024-11-18 06:50:57.321852] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:04.258 [2024-11-18 06:50:57.321881] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:17:04.258 [2024-11-18 06:50:57.321887] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.466 ms 00:17:04.258 [2024-11-18 06:50:57.321894] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:04.258 [2024-11-18 06:50:57.323376] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:04.258 [2024-11-18 06:50:57.323466] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:17:04.258 [2024-11-18 06:50:57.323477] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.439 ms 00:17:04.258 [2024-11-18 06:50:57.323483] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:04.258 [2024-11-18 06:50:57.323508] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:17:04.258 [2024-11-18 06:50:57.323519] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:17:04.258 [2024-11-18 06:50:57.323526] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:17:04.258 [2024-11-18 06:50:57.323536] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:17:04.258 [2024-11-18 06:50:57.323541] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:17:04.258 [2024-11-18 06:50:57.323548] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:17:04.258 [2024-11-18 06:50:57.323554] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:17:04.258 [2024-11-18 06:50:57.323561] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:17:04.258 [2024-11-18 06:50:57.323566] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:17:04.258 [2024-11-18 06:50:57.323573] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:17:04.258 [2024-11-18 06:50:57.323579] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:17:04.258 [2024-11-18 06:50:57.323587] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:17:04.258 [2024-11-18 06:50:57.323593] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:17:04.258 [2024-11-18 06:50:57.323600] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:17:04.258 [2024-11-18 06:50:57.323605] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:17:04.258 [2024-11-18 06:50:57.323612] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:17:04.258 [2024-11-18 06:50:57.323617] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:17:04.258 [2024-11-18 06:50:57.323624] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:17:04.258 [2024-11-18 06:50:57.323630] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:17:04.258 [2024-11-18 06:50:57.323638] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:17:04.258 [2024-11-18 06:50:57.323644] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:17:04.258 [2024-11-18 06:50:57.323651] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:17:04.258 [2024-11-18 06:50:57.323657] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:17:04.258 [2024-11-18 06:50:57.323663] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:17:04.258 [2024-11-18 06:50:57.323669] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:17:04.258 [2024-11-18 06:50:57.323676] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:17:04.258 [2024-11-18 06:50:57.323682] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:17:04.258 [2024-11-18 06:50:57.323689] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:17:04.258 [2024-11-18 06:50:57.323695] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:17:04.258 [2024-11-18 06:50:57.323702] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:17:04.259 [2024-11-18 06:50:57.323707] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:17:04.259 [2024-11-18 06:50:57.323714] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:17:04.259 [2024-11-18 06:50:57.323720] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:17:04.259 [2024-11-18 06:50:57.323727] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:17:04.259 [2024-11-18 06:50:57.323733] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:17:04.259 [2024-11-18 06:50:57.323741] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:17:04.259 [2024-11-18 
06:50:57.323747] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:17:04.259 [2024-11-18 06:50:57.323755] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:17:04.259 [2024-11-18 06:50:57.323760] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:17:04.259 [2024-11-18 06:50:57.323767] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:17:04.259 [2024-11-18 06:50:57.323773] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:17:04.259 [2024-11-18 06:50:57.323780] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:17:04.259 [2024-11-18 06:50:57.323785] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:17:04.259 [2024-11-18 06:50:57.323792] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:17:04.259 [2024-11-18 06:50:57.323798] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:17:04.259 [2024-11-18 06:50:57.323805] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:17:04.259 [2024-11-18 06:50:57.323811] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:17:04.259 [2024-11-18 06:50:57.323817] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:17:04.259 [2024-11-18 06:50:57.323823] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:17:04.259 [2024-11-18 06:50:57.323830] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:17:04.259 [2024-11-18 06:50:57.323836] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:17:04.259 [2024-11-18 06:50:57.323844] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:17:04.259 [2024-11-18 06:50:57.323850] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:17:04.259 [2024-11-18 06:50:57.323856] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:17:04.259 [2024-11-18 06:50:57.323862] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:17:04.259 [2024-11-18 06:50:57.323869] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:17:04.259 [2024-11-18 06:50:57.323874] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:17:04.259 [2024-11-18 06:50:57.323881] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:17:04.259 [2024-11-18 06:50:57.323887] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:17:04.259 [2024-11-18 06:50:57.323893] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:17:04.259 [2024-11-18 06:50:57.323899] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 
00:17:04.259 [2024-11-18 06:50:57.323906] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:17:04.259 [2024-11-18 06:50:57.323911] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:17:04.259 [2024-11-18 06:50:57.323918] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:17:04.259 [2024-11-18 06:50:57.323923] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:17:04.259 [2024-11-18 06:50:57.323931] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:17:04.259 [2024-11-18 06:50:57.323937] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:17:04.259 [2024-11-18 06:50:57.323947] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:17:04.259 [2024-11-18 06:50:57.323953] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:17:04.259 [2024-11-18 06:50:57.323959] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:17:04.259 [2024-11-18 06:50:57.323965] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:17:04.259 [2024-11-18 06:50:57.323973] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:17:04.259 [2024-11-18 06:50:57.323993] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:17:04.259 [2024-11-18 06:50:57.324000] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:17:04.259 [2024-11-18 06:50:57.324006] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:17:04.259 [2024-11-18 06:50:57.324013] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:17:04.259 [2024-11-18 06:50:57.324018] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:17:04.259 [2024-11-18 06:50:57.324025] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:17:04.259 [2024-11-18 06:50:57.324031] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:17:04.259 [2024-11-18 06:50:57.324038] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:17:04.259 [2024-11-18 06:50:57.324044] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:17:04.259 [2024-11-18 06:50:57.324051] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:17:04.259 [2024-11-18 06:50:57.324056] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:17:04.259 [2024-11-18 06:50:57.324064] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:17:04.259 [2024-11-18 06:50:57.324074] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:17:04.259 [2024-11-18 06:50:57.324081] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 
wr_cnt: 0 state: free 00:17:04.259 [2024-11-18 06:50:57.324087] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:17:04.259 [2024-11-18 06:50:57.324094] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:17:04.259 [2024-11-18 06:50:57.324100] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:17:04.259 [2024-11-18 06:50:57.324107] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:17:04.259 [2024-11-18 06:50:57.324112] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:17:04.259 [2024-11-18 06:50:57.324119] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:17:04.259 [2024-11-18 06:50:57.324125] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:17:04.259 [2024-11-18 06:50:57.324132] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:17:04.259 [2024-11-18 06:50:57.324137] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:17:04.259 [2024-11-18 06:50:57.324144] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:17:04.259 [2024-11-18 06:50:57.324149] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:17:04.259 [2024-11-18 06:50:57.324156] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:17:04.259 [2024-11-18 06:50:57.324162] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:17:04.259 [2024-11-18 06:50:57.324171] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:17:04.259 [2024-11-18 06:50:57.324176] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:17:04.259 [2024-11-18 06:50:57.324189] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:17:04.259 [2024-11-18 06:50:57.324195] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: ec0999a9-9a7d-450e-b3a6-a004ddc4ed37 00:17:04.259 [2024-11-18 06:50:57.324203] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:17:04.259 [2024-11-18 06:50:57.324210] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:17:04.259 [2024-11-18 06:50:57.324216] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:17:04.259 [2024-11-18 06:50:57.324222] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:17:04.259 [2024-11-18 06:50:57.324229] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:17:04.259 [2024-11-18 06:50:57.324236] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:17:04.259 [2024-11-18 06:50:57.324243] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:17:04.259 [2024-11-18 06:50:57.324248] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:17:04.259 [2024-11-18 06:50:57.324253] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:17:04.259 [2024-11-18 06:50:57.324259] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:04.259 
[2024-11-18 06:50:57.324266] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:17:04.259 [2024-11-18 06:50:57.324273] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.752 ms 00:17:04.259 [2024-11-18 06:50:57.324283] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:04.259 [2024-11-18 06:50:57.325476] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:04.259 [2024-11-18 06:50:57.325490] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:17:04.259 [2024-11-18 06:50:57.325497] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.177 ms 00:17:04.259 [2024-11-18 06:50:57.325504] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:04.259 [2024-11-18 06:50:57.325569] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:04.259 [2024-11-18 06:50:57.325577] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:17:04.259 [2024-11-18 06:50:57.325583] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:17:04.260 [2024-11-18 06:50:57.325590] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:04.260 [2024-11-18 06:50:57.330011] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:04.260 [2024-11-18 06:50:57.330106] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:04.260 [2024-11-18 06:50:57.330117] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:04.260 [2024-11-18 06:50:57.330124] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:04.260 [2024-11-18 06:50:57.330175] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:04.260 [2024-11-18 06:50:57.330183] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:04.260 [2024-11-18 06:50:57.330189] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:04.260 [2024-11-18 06:50:57.330197] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:04.260 [2024-11-18 06:50:57.330232] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:04.260 [2024-11-18 06:50:57.330240] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:04.260 [2024-11-18 06:50:57.330246] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:04.260 [2024-11-18 06:50:57.330252] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:04.260 [2024-11-18 06:50:57.330266] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:04.260 [2024-11-18 06:50:57.330273] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:04.260 [2024-11-18 06:50:57.330279] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:04.260 [2024-11-18 06:50:57.330286] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:04.260 [2024-11-18 06:50:57.338197] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:04.260 [2024-11-18 06:50:57.338234] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:04.260 [2024-11-18 06:50:57.338242] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:04.260 [2024-11-18 06:50:57.338249] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:04.522 [2024-11-18 06:50:57.344229] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:04.522 [2024-11-18 06:50:57.344266] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:04.522 [2024-11-18 06:50:57.344275] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:04.522 [2024-11-18 06:50:57.344284] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:04.522 [2024-11-18 06:50:57.344313] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:04.522 [2024-11-18 06:50:57.344324] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:04.522 [2024-11-18 06:50:57.344330] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:04.522 [2024-11-18 06:50:57.344340] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:04.522 [2024-11-18 06:50:57.344363] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:04.522 [2024-11-18 06:50:57.344370] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:04.522 [2024-11-18 06:50:57.344377] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:04.522 [2024-11-18 06:50:57.344383] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:04.522 [2024-11-18 06:50:57.344431] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:04.522 [2024-11-18 06:50:57.344439] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:04.522 [2024-11-18 06:50:57.344447] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:04.522 [2024-11-18 06:50:57.344454] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:04.522 [2024-11-18 06:50:57.344479] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:04.522 [2024-11-18 06:50:57.344488] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:17:04.522 [2024-11-18 06:50:57.344493] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:04.522 [2024-11-18 06:50:57.344501] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:04.522 [2024-11-18 06:50:57.344531] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:04.522 [2024-11-18 06:50:57.344538] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:04.522 [2024-11-18 06:50:57.344546] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:04.522 [2024-11-18 06:50:57.344552] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:04.522 [2024-11-18 06:50:57.344584] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:04.522 [2024-11-18 06:50:57.344594] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:04.522 [2024-11-18 06:50:57.344599] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:04.522 [2024-11-18 06:50:57.344606] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:04.522 [2024-11-18 06:50:57.344709] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 44.401 ms, result 0 00:17:04.522 06:50:57 ftl.ftl_trim -- ftl/trim.sh@84 -- # file=/home/vagrant/spdk_repo/spdk/test/ftl/data 00:17:04.522 06:50:57 ftl.ftl_trim -- ftl/trim.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 
--of=/home/vagrant/spdk_repo/spdk/test/ftl/data --count=65536 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:17:04.522 [2024-11-18 06:50:57.553393] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 00:17:04.522 [2024-11-18 06:50:57.553524] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85310 ] 00:17:04.783 [2024-11-18 06:50:57.707192] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:04.783 [2024-11-18 06:50:57.725791] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:04.783 [2024-11-18 06:50:57.807155] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:04.783 [2024-11-18 06:50:57.807205] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:05.046 [2024-11-18 06:50:57.953181] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:05.046 [2024-11-18 06:50:57.953219] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:17:05.046 [2024-11-18 06:50:57.953229] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:17:05.046 [2024-11-18 06:50:57.953235] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:05.046 [2024-11-18 06:50:57.954943] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:05.046 [2024-11-18 06:50:57.955070] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:05.046 [2024-11-18 06:50:57.955083] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.696 ms 00:17:05.046 [2024-11-18 06:50:57.955094] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:05.046 [2024-11-18 06:50:57.955148] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:17:05.046 [2024-11-18 06:50:57.955338] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:17:05.046 [2024-11-18 06:50:57.955350] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:05.046 [2024-11-18 06:50:57.955357] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:05.046 [2024-11-18 06:50:57.955367] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.208 ms 00:17:05.046 [2024-11-18 06:50:57.955372] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:05.046 [2024-11-18 06:50:57.956295] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:17:05.046 [2024-11-18 06:50:57.958255] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:05.046 [2024-11-18 06:50:57.958353] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:17:05.046 [2024-11-18 06:50:57.958364] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.961 ms 00:17:05.046 [2024-11-18 06:50:57.958380] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:05.046 [2024-11-18 06:50:57.958420] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:05.046 [2024-11-18 06:50:57.958428] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:17:05.046 [2024-11-18 06:50:57.958437] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.013 ms 00:17:05.046 [2024-11-18 06:50:57.958445] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:05.046 [2024-11-18 06:50:57.962700] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:05.046 [2024-11-18 06:50:57.962723] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:05.046 [2024-11-18 06:50:57.962731] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.222 ms 00:17:05.046 [2024-11-18 06:50:57.962737] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:05.046 [2024-11-18 06:50:57.962822] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:05.046 [2024-11-18 06:50:57.962836] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:05.046 [2024-11-18 06:50:57.962843] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:17:05.046 [2024-11-18 06:50:57.962848] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:05.046 [2024-11-18 06:50:57.962867] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:05.046 [2024-11-18 06:50:57.962874] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:17:05.046 [2024-11-18 06:50:57.962880] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:17:05.046 [2024-11-18 06:50:57.962886] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:05.046 [2024-11-18 06:50:57.962900] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:17:05.046 [2024-11-18 06:50:57.964036] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:05.046 [2024-11-18 06:50:57.964056] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:05.046 [2024-11-18 06:50:57.964063] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.139 ms 00:17:05.046 [2024-11-18 06:50:57.964068] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:05.046 [2024-11-18 06:50:57.964096] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:05.046 [2024-11-18 06:50:57.964103] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:17:05.046 [2024-11-18 06:50:57.964111] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:17:05.046 [2024-11-18 06:50:57.964117] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:05.046 [2024-11-18 06:50:57.964133] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:17:05.046 [2024-11-18 06:50:57.964146] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:17:05.046 [2024-11-18 06:50:57.964172] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:17:05.046 [2024-11-18 06:50:57.964187] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:17:05.047 [2024-11-18 06:50:57.964265] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:17:05.047 [2024-11-18 06:50:57.964273] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:17:05.047 [2024-11-18 06:50:57.964281] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: 
*NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:17:05.047 [2024-11-18 06:50:57.964289] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:17:05.047 [2024-11-18 06:50:57.964295] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:17:05.047 [2024-11-18 06:50:57.964304] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:17:05.047 [2024-11-18 06:50:57.964309] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:17:05.047 [2024-11-18 06:50:57.964317] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:17:05.047 [2024-11-18 06:50:57.964323] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:17:05.047 [2024-11-18 06:50:57.964330] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:05.047 [2024-11-18 06:50:57.964337] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:17:05.047 [2024-11-18 06:50:57.964342] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.199 ms 00:17:05.047 [2024-11-18 06:50:57.964347] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:05.047 [2024-11-18 06:50:57.964419] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:05.047 [2024-11-18 06:50:57.964425] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:17:05.047 [2024-11-18 06:50:57.964433] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.058 ms 00:17:05.047 [2024-11-18 06:50:57.964438] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:05.047 [2024-11-18 06:50:57.964510] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:17:05.047 [2024-11-18 06:50:57.964522] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:17:05.047 [2024-11-18 06:50:57.964530] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:05.047 [2024-11-18 06:50:57.964541] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:05.047 [2024-11-18 06:50:57.964546] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:17:05.047 [2024-11-18 06:50:57.964551] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:17:05.047 [2024-11-18 06:50:57.964556] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:17:05.047 [2024-11-18 06:50:57.964562] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:17:05.047 [2024-11-18 06:50:57.964569] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:17:05.047 [2024-11-18 06:50:57.964574] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:05.047 [2024-11-18 06:50:57.964579] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:17:05.047 [2024-11-18 06:50:57.964583] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:17:05.047 [2024-11-18 06:50:57.964588] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:05.047 [2024-11-18 06:50:57.964594] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:17:05.047 [2024-11-18 06:50:57.964600] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:17:05.047 [2024-11-18 06:50:57.964605] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:05.047 [2024-11-18 06:50:57.964609] ftl_layout.c: 
130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:17:05.047 [2024-11-18 06:50:57.964614] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:17:05.047 [2024-11-18 06:50:57.964619] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:05.047 [2024-11-18 06:50:57.964624] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:17:05.047 [2024-11-18 06:50:57.964629] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:17:05.047 [2024-11-18 06:50:57.964634] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:05.047 [2024-11-18 06:50:57.964639] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:17:05.047 [2024-11-18 06:50:57.964644] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:17:05.047 [2024-11-18 06:50:57.964653] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:05.047 [2024-11-18 06:50:57.964658] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:17:05.047 [2024-11-18 06:50:57.964663] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:17:05.047 [2024-11-18 06:50:57.964667] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:05.047 [2024-11-18 06:50:57.964672] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:17:05.047 [2024-11-18 06:50:57.964677] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:17:05.047 [2024-11-18 06:50:57.964682] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:05.047 [2024-11-18 06:50:57.964687] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:17:05.047 [2024-11-18 06:50:57.964692] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:17:05.047 [2024-11-18 06:50:57.964696] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:05.047 [2024-11-18 06:50:57.964702] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:17:05.047 [2024-11-18 06:50:57.964707] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:17:05.047 [2024-11-18 06:50:57.964712] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:05.047 [2024-11-18 06:50:57.964716] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:17:05.047 [2024-11-18 06:50:57.964722] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:17:05.047 [2024-11-18 06:50:57.964726] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:05.047 [2024-11-18 06:50:57.964733] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:17:05.047 [2024-11-18 06:50:57.964737] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:17:05.047 [2024-11-18 06:50:57.964742] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:05.047 [2024-11-18 06:50:57.964747] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:17:05.047 [2024-11-18 06:50:57.964753] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:17:05.047 [2024-11-18 06:50:57.964759] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:05.047 [2024-11-18 06:50:57.964764] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:05.047 [2024-11-18 06:50:57.964770] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:17:05.047 
[2024-11-18 06:50:57.964775] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:17:05.047 [2024-11-18 06:50:57.964780] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:17:05.047 [2024-11-18 06:50:57.964785] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:17:05.047 [2024-11-18 06:50:57.964790] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:17:05.047 [2024-11-18 06:50:57.964795] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:17:05.047 [2024-11-18 06:50:57.964801] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:17:05.047 [2024-11-18 06:50:57.964808] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:05.047 [2024-11-18 06:50:57.964816] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:17:05.047 [2024-11-18 06:50:57.964823] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:17:05.047 [2024-11-18 06:50:57.964829] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:17:05.047 [2024-11-18 06:50:57.964834] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:17:05.047 [2024-11-18 06:50:57.964840] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:17:05.047 [2024-11-18 06:50:57.964845] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:17:05.047 [2024-11-18 06:50:57.964850] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:17:05.047 [2024-11-18 06:50:57.964855] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:17:05.047 [2024-11-18 06:50:57.964861] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:17:05.047 [2024-11-18 06:50:57.964866] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:17:05.047 [2024-11-18 06:50:57.964872] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:17:05.047 [2024-11-18 06:50:57.964878] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:17:05.047 [2024-11-18 06:50:57.964883] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:17:05.047 [2024-11-18 06:50:57.964889] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:17:05.047 [2024-11-18 06:50:57.964894] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:17:05.047 [2024-11-18 06:50:57.964901] upgrade/ftl_sb_v5.c: 
430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:05.047 [2024-11-18 06:50:57.964909] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:17:05.047 [2024-11-18 06:50:57.964915] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:17:05.047 [2024-11-18 06:50:57.964921] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:17:05.047 [2024-11-18 06:50:57.964926] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:17:05.047 [2024-11-18 06:50:57.964932] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:05.047 [2024-11-18 06:50:57.964937] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:17:05.047 [2024-11-18 06:50:57.964943] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.474 ms 00:17:05.047 [2024-11-18 06:50:57.964948] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:05.047 [2024-11-18 06:50:57.972562] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:05.047 [2024-11-18 06:50:57.972588] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:05.048 [2024-11-18 06:50:57.972598] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.563 ms 00:17:05.048 [2024-11-18 06:50:57.972604] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:05.048 [2024-11-18 06:50:57.972689] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:05.048 [2024-11-18 06:50:57.972699] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:17:05.048 [2024-11-18 06:50:57.972705] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.047 ms 00:17:05.048 [2024-11-18 06:50:57.972710] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:05.048 [2024-11-18 06:50:57.988018] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:05.048 [2024-11-18 06:50:57.988057] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:05.048 [2024-11-18 06:50:57.988070] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.289 ms 00:17:05.048 [2024-11-18 06:50:57.988079] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:05.048 [2024-11-18 06:50:57.988151] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:05.048 [2024-11-18 06:50:57.988169] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:05.048 [2024-11-18 06:50:57.988178] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:17:05.048 [2024-11-18 06:50:57.988186] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:05.048 [2024-11-18 06:50:57.988495] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:05.048 [2024-11-18 06:50:57.988522] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:05.048 [2024-11-18 06:50:57.988532] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.290 ms 00:17:05.048 [2024-11-18 06:50:57.988540] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:05.048 [2024-11-18 
06:50:57.988680] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:05.048 [2024-11-18 06:50:57.988696] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:05.048 [2024-11-18 06:50:57.988709] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.117 ms 00:17:05.048 [2024-11-18 06:50:57.988717] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:05.048 [2024-11-18 06:50:57.993938] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:05.048 [2024-11-18 06:50:57.993969] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:05.048 [2024-11-18 06:50:57.994001] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.197 ms 00:17:05.048 [2024-11-18 06:50:57.994009] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:05.048 [2024-11-18 06:50:57.996291] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:17:05.048 [2024-11-18 06:50:57.996324] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:17:05.048 [2024-11-18 06:50:57.996342] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:05.048 [2024-11-18 06:50:57.996350] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:17:05.048 [2024-11-18 06:50:57.996357] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.247 ms 00:17:05.048 [2024-11-18 06:50:57.996365] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:05.048 [2024-11-18 06:50:58.009798] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:05.048 [2024-11-18 06:50:58.009825] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:17:05.048 [2024-11-18 06:50:58.009833] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.392 ms 00:17:05.048 [2024-11-18 06:50:58.009843] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:05.048 [2024-11-18 06:50:58.011481] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:05.048 [2024-11-18 06:50:58.011582] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:17:05.048 [2024-11-18 06:50:58.011594] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.586 ms 00:17:05.048 [2024-11-18 06:50:58.011600] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:05.048 [2024-11-18 06:50:58.012820] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:05.048 [2024-11-18 06:50:58.012841] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:17:05.048 [2024-11-18 06:50:58.012848] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.194 ms 00:17:05.048 [2024-11-18 06:50:58.012857] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:05.048 [2024-11-18 06:50:58.013104] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:05.048 [2024-11-18 06:50:58.013125] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:17:05.048 [2024-11-18 06:50:58.013132] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.201 ms 00:17:05.048 [2024-11-18 06:50:58.013137] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:05.048 [2024-11-18 06:50:58.026896] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:17:05.048 [2024-11-18 06:50:58.027045] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:17:05.048 [2024-11-18 06:50:58.027060] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.742 ms 00:17:05.048 [2024-11-18 06:50:58.027066] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:05.048 [2024-11-18 06:50:58.032751] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:17:05.048 [2024-11-18 06:50:58.044191] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:05.048 [2024-11-18 06:50:58.044218] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:17:05.048 [2024-11-18 06:50:58.044232] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.083 ms 00:17:05.048 [2024-11-18 06:50:58.044238] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:05.048 [2024-11-18 06:50:58.044296] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:05.048 [2024-11-18 06:50:58.044304] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:17:05.048 [2024-11-18 06:50:58.044310] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:17:05.048 [2024-11-18 06:50:58.044318] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:05.048 [2024-11-18 06:50:58.044353] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:05.048 [2024-11-18 06:50:58.044359] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:17:05.048 [2024-11-18 06:50:58.044365] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.022 ms 00:17:05.048 [2024-11-18 06:50:58.044371] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:05.048 [2024-11-18 06:50:58.044390] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:05.048 [2024-11-18 06:50:58.044396] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:17:05.048 [2024-11-18 06:50:58.044402] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:17:05.048 [2024-11-18 06:50:58.044408] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:05.048 [2024-11-18 06:50:58.044433] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:17:05.048 [2024-11-18 06:50:58.044441] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:05.048 [2024-11-18 06:50:58.044446] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:17:05.048 [2024-11-18 06:50:58.044453] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:17:05.048 [2024-11-18 06:50:58.044458] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:05.048 [2024-11-18 06:50:58.047701] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:05.048 [2024-11-18 06:50:58.047737] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:17:05.048 [2024-11-18 06:50:58.047747] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.227 ms 00:17:05.048 [2024-11-18 06:50:58.047759] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:05.048 [2024-11-18 06:50:58.047824] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:05.048 [2024-11-18 06:50:58.047833] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize 
initialization 00:17:05.048 [2024-11-18 06:50:58.047840] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:17:05.048 [2024-11-18 06:50:58.047846] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:05.048 [2024-11-18 06:50:58.048552] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:17:05.048 [2024-11-18 06:50:58.049319] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 95.172 ms, result 0 00:17:05.048 [2024-11-18 06:50:58.049806] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:17:05.048 [2024-11-18 06:50:58.060522] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:17:05.992  [2024-11-18T06:51:00.468Z] Copying: 28/256 [MB] (28 MBps) [2024-11-18T06:51:01.409Z] Copying: 38/256 [MB] (10 MBps) [2024-11-18T06:51:02.358Z] Copying: 62/256 [MB] (23 MBps) [2024-11-18T06:51:03.326Z] Copying: 74/256 [MB] (12 MBps) [2024-11-18T06:51:04.270Z] Copying: 95/256 [MB] (21 MBps) [2024-11-18T06:51:05.213Z] Copying: 118/256 [MB] (23 MBps) [2024-11-18T06:51:06.157Z] Copying: 140/256 [MB] (21 MBps) [2024-11-18T06:51:07.101Z] Copying: 163/256 [MB] (23 MBps) [2024-11-18T06:51:08.489Z] Copying: 183/256 [MB] (19 MBps) [2024-11-18T06:51:09.432Z] Copying: 196/256 [MB] (13 MBps) [2024-11-18T06:51:10.376Z] Copying: 211/256 [MB] (15 MBps) [2024-11-18T06:51:11.320Z] Copying: 228/256 [MB] (16 MBps) [2024-11-18T06:51:12.267Z] Copying: 239/256 [MB] (11 MBps) [2024-11-18T06:51:12.267Z] Copying: 256/256 [MB] (average 18 MBps)[2024-11-18 06:51:11.923836] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:17:19.180 [2024-11-18 06:51:11.925691] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:19.180 [2024-11-18 06:51:11.925892] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:17:19.180 [2024-11-18 06:51:11.925917] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:17:19.180 [2024-11-18 06:51:11.925935] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:19.180 [2024-11-18 06:51:11.925964] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:17:19.180 [2024-11-18 06:51:11.926617] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:19.180 [2024-11-18 06:51:11.926642] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:17:19.180 [2024-11-18 06:51:11.926652] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.623 ms 00:17:19.180 [2024-11-18 06:51:11.926660] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:19.180 [2024-11-18 06:51:11.926938] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:19.180 [2024-11-18 06:51:11.926966] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:17:19.180 [2024-11-18 06:51:11.926999] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.256 ms 00:17:19.180 [2024-11-18 06:51:11.927011] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:19.180 [2024-11-18 06:51:11.930687] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:19.180 [2024-11-18 06:51:11.930715] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: 
Persist L2P 00:17:19.180 [2024-11-18 06:51:11.930725] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.658 ms 00:17:19.180 [2024-11-18 06:51:11.930733] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:19.180 [2024-11-18 06:51:11.937748] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:19.180 [2024-11-18 06:51:11.937787] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:17:19.180 [2024-11-18 06:51:11.937798] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.983 ms 00:17:19.180 [2024-11-18 06:51:11.937817] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:19.180 [2024-11-18 06:51:11.940349] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:19.180 [2024-11-18 06:51:11.940531] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:17:19.180 [2024-11-18 06:51:11.940550] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.485 ms 00:17:19.180 [2024-11-18 06:51:11.940558] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:19.180 [2024-11-18 06:51:11.946212] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:19.180 [2024-11-18 06:51:11.946273] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:17:19.180 [2024-11-18 06:51:11.946283] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.553 ms 00:17:19.180 [2024-11-18 06:51:11.946291] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:19.180 [2024-11-18 06:51:11.946428] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:19.180 [2024-11-18 06:51:11.946438] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:17:19.180 [2024-11-18 06:51:11.946447] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.085 ms 00:17:19.180 [2024-11-18 06:51:11.946459] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:19.180 [2024-11-18 06:51:11.949814] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:19.180 [2024-11-18 06:51:11.949860] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:17:19.180 [2024-11-18 06:51:11.949870] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.337 ms 00:17:19.180 [2024-11-18 06:51:11.949877] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:19.180 [2024-11-18 06:51:11.952577] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:19.180 [2024-11-18 06:51:11.952623] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:17:19.180 [2024-11-18 06:51:11.952632] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.657 ms 00:17:19.180 [2024-11-18 06:51:11.952639] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:19.180 [2024-11-18 06:51:11.954956] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:19.180 [2024-11-18 06:51:11.955015] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:17:19.180 [2024-11-18 06:51:11.955024] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.275 ms 00:17:19.180 [2024-11-18 06:51:11.955030] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:19.180 [2024-11-18 06:51:11.957502] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:19.180 [2024-11-18 06:51:11.957548] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:17:19.180 [2024-11-18 06:51:11.957557] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.399 ms 00:17:19.180 [2024-11-18 06:51:11.957564] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:19.180 [2024-11-18 06:51:11.957602] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:17:19.180 [2024-11-18 06:51:11.957617] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:17:19.180 [2024-11-18 06:51:11.957627] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:17:19.180 [2024-11-18 06:51:11.957635] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:17:19.181 [2024-11-18 06:51:11.957643] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:17:19.181 [2024-11-18 06:51:11.957650] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:17:19.181 [2024-11-18 06:51:11.957657] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:17:19.181 [2024-11-18 06:51:11.957666] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:17:19.181 [2024-11-18 06:51:11.957673] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:17:19.181 [2024-11-18 06:51:11.957682] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:17:19.181 [2024-11-18 06:51:11.957690] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:17:19.181 [2024-11-18 06:51:11.957698] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:17:19.181 [2024-11-18 06:51:11.957706] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:17:19.181 [2024-11-18 06:51:11.957715] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:17:19.181 [2024-11-18 06:51:11.957722] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:17:19.181 [2024-11-18 06:51:11.957734] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:17:19.181 [2024-11-18 06:51:11.957741] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:17:19.181 [2024-11-18 06:51:11.957748] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:17:19.181 [2024-11-18 06:51:11.957755] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:17:19.181 [2024-11-18 06:51:11.957762] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:17:19.181 [2024-11-18 06:51:11.957769] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:17:19.181 [2024-11-18 06:51:11.957777] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:17:19.181 [2024-11-18 06:51:11.957783] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: 
free 00:17:19.181 [2024-11-18 06:51:11.957791] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:17:19.181 [2024-11-18 06:51:11.957798] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:17:19.181 [2024-11-18 06:51:11.957806] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:17:19.181 [2024-11-18 06:51:11.957813] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:17:19.181 [2024-11-18 06:51:11.957821] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:17:19.181 [2024-11-18 06:51:11.957828] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:17:19.181 [2024-11-18 06:51:11.957835] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:17:19.181 [2024-11-18 06:51:11.957842] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:17:19.181 [2024-11-18 06:51:11.957852] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:17:19.181 [2024-11-18 06:51:11.957859] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:17:19.181 [2024-11-18 06:51:11.957867] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:17:19.181 [2024-11-18 06:51:11.957875] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:17:19.181 [2024-11-18 06:51:11.957882] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:17:19.181 [2024-11-18 06:51:11.957889] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:17:19.181 [2024-11-18 06:51:11.957897] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:17:19.181 [2024-11-18 06:51:11.957904] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:17:19.181 [2024-11-18 06:51:11.957911] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:17:19.181 [2024-11-18 06:51:11.957918] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:17:19.181 [2024-11-18 06:51:11.957927] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:17:19.181 [2024-11-18 06:51:11.957934] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:17:19.181 [2024-11-18 06:51:11.957942] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:17:19.181 [2024-11-18 06:51:11.957949] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:17:19.181 [2024-11-18 06:51:11.957957] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:17:19.181 [2024-11-18 06:51:11.957964] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:17:19.181 [2024-11-18 06:51:11.957971] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 
261120 wr_cnt: 0 state: free 00:17:19.181 [2024-11-18 06:51:11.957996] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:17:19.181 [2024-11-18 06:51:11.958004] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:17:19.181 [2024-11-18 06:51:11.958011] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:17:19.181 [2024-11-18 06:51:11.958018] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:17:19.181 [2024-11-18 06:51:11.958025] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:17:19.181 [2024-11-18 06:51:11.958033] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:17:19.181 [2024-11-18 06:51:11.958040] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:17:19.181 [2024-11-18 06:51:11.958047] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:17:19.181 [2024-11-18 06:51:11.958057] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:17:19.181 [2024-11-18 06:51:11.958064] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:17:19.181 [2024-11-18 06:51:11.958071] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:17:19.181 [2024-11-18 06:51:11.958079] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:17:19.181 [2024-11-18 06:51:11.958086] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:17:19.181 [2024-11-18 06:51:11.958093] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:17:19.181 [2024-11-18 06:51:11.958100] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:17:19.181 [2024-11-18 06:51:11.958126] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:17:19.181 [2024-11-18 06:51:11.958134] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:17:19.181 [2024-11-18 06:51:11.958141] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:17:19.181 [2024-11-18 06:51:11.958150] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:17:19.181 [2024-11-18 06:51:11.958157] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:17:19.181 [2024-11-18 06:51:11.958167] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:17:19.181 [2024-11-18 06:51:11.958175] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:17:19.181 [2024-11-18 06:51:11.958182] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:17:19.181 [2024-11-18 06:51:11.958190] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:17:19.181 [2024-11-18 06:51:11.958198] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:17:19.181 [2024-11-18 06:51:11.958206] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:17:19.181 [2024-11-18 06:51:11.958214] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:17:19.181 [2024-11-18 06:51:11.958221] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:17:19.181 [2024-11-18 06:51:11.958228] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:17:19.181 [2024-11-18 06:51:11.958236] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:17:19.181 [2024-11-18 06:51:11.958243] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:17:19.181 [2024-11-18 06:51:11.958250] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:17:19.181 [2024-11-18 06:51:11.958257] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:17:19.181 [2024-11-18 06:51:11.958264] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:17:19.181 [2024-11-18 06:51:11.958272] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:17:19.181 [2024-11-18 06:51:11.958280] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:17:19.181 [2024-11-18 06:51:11.958287] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:17:19.182 [2024-11-18 06:51:11.958295] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:17:19.182 [2024-11-18 06:51:11.958302] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:17:19.182 [2024-11-18 06:51:11.958309] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:17:19.182 [2024-11-18 06:51:11.958317] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:17:19.182 [2024-11-18 06:51:11.958324] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:17:19.182 [2024-11-18 06:51:11.958332] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:17:19.182 [2024-11-18 06:51:11.958340] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:17:19.182 [2024-11-18 06:51:11.958347] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:17:19.182 [2024-11-18 06:51:11.958354] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:17:19.182 [2024-11-18 06:51:11.958361] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:17:19.182 [2024-11-18 06:51:11.958370] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:17:19.182 [2024-11-18 06:51:11.958379] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:17:19.182 [2024-11-18 06:51:11.958386] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:17:19.182 [2024-11-18 06:51:11.958394] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:17:19.182 [2024-11-18 06:51:11.958415] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:17:19.182 [2024-11-18 06:51:11.958423] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:17:19.182 [2024-11-18 06:51:11.958440] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:17:19.182 [2024-11-18 06:51:11.958449] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: ec0999a9-9a7d-450e-b3a6-a004ddc4ed37 00:17:19.182 [2024-11-18 06:51:11.958459] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:17:19.182 [2024-11-18 06:51:11.958467] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:17:19.182 [2024-11-18 06:51:11.958475] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:17:19.182 [2024-11-18 06:51:11.958483] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:17:19.182 [2024-11-18 06:51:11.958491] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:17:19.182 [2024-11-18 06:51:11.958504] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:17:19.182 [2024-11-18 06:51:11.958513] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:17:19.182 [2024-11-18 06:51:11.958519] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:17:19.182 [2024-11-18 06:51:11.958527] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:17:19.182 [2024-11-18 06:51:11.958534] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:19.182 [2024-11-18 06:51:11.958546] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:17:19.182 [2024-11-18 06:51:11.958556] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.933 ms 00:17:19.182 [2024-11-18 06:51:11.958564] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:19.182 [2024-11-18 06:51:11.960993] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:19.182 [2024-11-18 06:51:11.961023] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:17:19.182 [2024-11-18 06:51:11.961041] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.410 ms 00:17:19.182 [2024-11-18 06:51:11.961049] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:19.182 [2024-11-18 06:51:11.961176] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:19.182 [2024-11-18 06:51:11.961184] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:17:19.182 [2024-11-18 06:51:11.961193] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.100 ms 00:17:19.182 [2024-11-18 06:51:11.961200] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:19.182 [2024-11-18 06:51:11.969617] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:19.182 [2024-11-18 06:51:11.969806] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:19.182 [2024-11-18 06:51:11.969825] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:19.182 [2024-11-18 06:51:11.969833] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:17:19.182 [2024-11-18 06:51:11.969924] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:19.182 [2024-11-18 06:51:11.969933] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:19.182 [2024-11-18 06:51:11.969941] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:19.182 [2024-11-18 06:51:11.969949] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:19.182 [2024-11-18 06:51:11.970053] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:19.182 [2024-11-18 06:51:11.970065] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:19.182 [2024-11-18 06:51:11.970072] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:19.182 [2024-11-18 06:51:11.970084] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:19.182 [2024-11-18 06:51:11.970104] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:19.182 [2024-11-18 06:51:11.970112] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:19.182 [2024-11-18 06:51:11.970121] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:19.182 [2024-11-18 06:51:11.970129] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:19.182 [2024-11-18 06:51:11.984969] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:19.182 [2024-11-18 06:51:11.985072] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:19.182 [2024-11-18 06:51:11.985091] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:19.182 [2024-11-18 06:51:11.985100] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:19.182 [2024-11-18 06:51:11.996042] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:19.182 [2024-11-18 06:51:11.996234] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:19.182 [2024-11-18 06:51:11.996252] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:19.182 [2024-11-18 06:51:11.996261] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:19.182 [2024-11-18 06:51:11.996315] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:19.182 [2024-11-18 06:51:11.996326] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:19.182 [2024-11-18 06:51:11.996335] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:19.182 [2024-11-18 06:51:11.996352] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:19.182 [2024-11-18 06:51:11.996386] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:19.182 [2024-11-18 06:51:11.996400] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:19.182 [2024-11-18 06:51:11.996408] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:19.182 [2024-11-18 06:51:11.996417] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:19.182 [2024-11-18 06:51:11.996497] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:19.182 [2024-11-18 06:51:11.996507] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:19.182 [2024-11-18 06:51:11.996516] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 
ms 00:17:19.182 [2024-11-18 06:51:11.996524] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:19.182 [2024-11-18 06:51:11.996561] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:19.182 [2024-11-18 06:51:11.996571] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:17:19.182 [2024-11-18 06:51:11.996583] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:19.182 [2024-11-18 06:51:11.996591] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:19.182 [2024-11-18 06:51:11.996633] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:19.182 [2024-11-18 06:51:11.996643] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:19.182 [2024-11-18 06:51:11.996651] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:19.182 [2024-11-18 06:51:11.996660] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:19.182 [2024-11-18 06:51:11.996707] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:19.182 [2024-11-18 06:51:11.996717] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:19.182 [2024-11-18 06:51:11.996730] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:19.182 [2024-11-18 06:51:11.996737] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:19.183 [2024-11-18 06:51:11.996891] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 71.169 ms, result 0 00:17:19.183 00:17:19.183 00:17:19.183 06:51:12 ftl.ftl_trim -- ftl/trim.sh@86 -- # cmp --bytes=4194304 /home/vagrant/spdk_repo/spdk/test/ftl/data /dev/zero 00:17:19.183 06:51:12 ftl.ftl_trim -- ftl/trim.sh@87 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/data 00:17:19.755 06:51:12 ftl.ftl_trim -- ftl/trim.sh@90 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern --ob=ftl0 --count=1024 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:17:20.017 [2024-11-18 06:51:12.856693] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 
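The three ftl.ftl_trim commands above (trim.sh lines 86, 87, and 90 per the xtrace markers) carry the actual verification: cmp --bytes=4194304 exits non-zero unless the first 4 MiB of the read-back data file matches /dev/zero, which is the trim check itself — a trimmed range must read back as zeroes; md5sum fingerprints that file; and spdk_dd then writes 1024 blocks of the saved random pattern into the ftl0 bdev, bringing up a fresh spdk_dd app whose FTL startup trace follows. A minimal standalone sketch of the same sequence, using the paths from this run (the variable names are illustrative, not taken from trim.sh):

  #!/usr/bin/env bash
  set -euo pipefail
  SPDK=/home/vagrant/spdk_repo/spdk
  DATA=$SPDK/test/ftl/data               # file read back from the FTL bdev
  PATTERN=$SPDK/test/ftl/random_pattern  # pattern written in the next step
  CFG=$SPDK/test/ftl/config/ftl.json     # saved FTL/bdev configuration

  # Trimmed range must read back as zeroes; cmp fails on the first non-zero byte.
  cmp --bytes=4194304 "$DATA" /dev/zero
  # Fingerprint the read-back data.
  md5sum "$DATA"
  # Write 1024 FTL blocks (4096 kB in this run) of random data into ftl0.
  "$SPDK/build/bin/spdk_dd" --if="$PATTERN" --ob=ftl0 --count=1024 --json="$CFG"

With the device's 4 KiB block size, --count=1024 corresponds to the "Copying: 4096/4096 [kB]" progress reported once the copy completes below.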
00:17:20.017 [2024-11-18 06:51:12.857053] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85475 ] 00:17:20.017 [2024-11-18 06:51:13.015145] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:20.017 [2024-11-18 06:51:13.043345] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:20.280 [2024-11-18 06:51:13.157120] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:20.280 [2024-11-18 06:51:13.157203] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:20.280 [2024-11-18 06:51:13.317746] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:20.280 [2024-11-18 06:51:13.317809] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:17:20.280 [2024-11-18 06:51:13.317824] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:17:20.280 [2024-11-18 06:51:13.317834] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:20.280 [2024-11-18 06:51:13.320446] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:20.280 [2024-11-18 06:51:13.320639] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:20.280 [2024-11-18 06:51:13.320661] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.592 ms 00:17:20.280 [2024-11-18 06:51:13.320669] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:20.280 [2024-11-18 06:51:13.320883] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:17:20.280 [2024-11-18 06:51:13.321540] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:17:20.280 [2024-11-18 06:51:13.321599] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:20.280 [2024-11-18 06:51:13.321613] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:20.280 [2024-11-18 06:51:13.321623] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.736 ms 00:17:20.280 [2024-11-18 06:51:13.321636] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:20.280 [2024-11-18 06:51:13.323576] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:17:20.280 [2024-11-18 06:51:13.327287] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:20.280 [2024-11-18 06:51:13.327479] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:17:20.280 [2024-11-18 06:51:13.327506] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.714 ms 00:17:20.280 [2024-11-18 06:51:13.327516] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:20.280 [2024-11-18 06:51:13.327691] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:20.280 [2024-11-18 06:51:13.327717] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:17:20.280 [2024-11-18 06:51:13.327727] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:17:20.280 [2024-11-18 06:51:13.327736] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:20.280 [2024-11-18 06:51:13.335767] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:17:20.280 [2024-11-18 06:51:13.335819] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:20.280 [2024-11-18 06:51:13.335834] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.983 ms 00:17:20.280 [2024-11-18 06:51:13.335846] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:20.280 [2024-11-18 06:51:13.336014] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:20.280 [2024-11-18 06:51:13.336027] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:20.280 [2024-11-18 06:51:13.336038] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.101 ms 00:17:20.280 [2024-11-18 06:51:13.336049] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:20.280 [2024-11-18 06:51:13.336083] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:20.280 [2024-11-18 06:51:13.336092] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:17:20.281 [2024-11-18 06:51:13.336101] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:17:20.281 [2024-11-18 06:51:13.336109] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:20.281 [2024-11-18 06:51:13.336132] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:17:20.281 [2024-11-18 06:51:13.338146] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:20.281 [2024-11-18 06:51:13.338337] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:20.281 [2024-11-18 06:51:13.338360] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.020 ms 00:17:20.281 [2024-11-18 06:51:13.338368] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:20.281 [2024-11-18 06:51:13.338420] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:20.281 [2024-11-18 06:51:13.338429] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:17:20.281 [2024-11-18 06:51:13.338442] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:17:20.281 [2024-11-18 06:51:13.338449] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:20.281 [2024-11-18 06:51:13.338468] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:17:20.281 [2024-11-18 06:51:13.338488] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:17:20.281 [2024-11-18 06:51:13.338524] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:17:20.281 [2024-11-18 06:51:13.338543] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:17:20.281 [2024-11-18 06:51:13.338650] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:17:20.281 [2024-11-18 06:51:13.338663] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:17:20.281 [2024-11-18 06:51:13.338678] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:17:20.281 [2024-11-18 06:51:13.338689] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:17:20.281 [2024-11-18 06:51:13.338698] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:17:20.281 [2024-11-18 06:51:13.338707] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:17:20.281 [2024-11-18 06:51:13.338727] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:17:20.281 [2024-11-18 06:51:13.338735] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:17:20.281 [2024-11-18 06:51:13.338754] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:17:20.281 [2024-11-18 06:51:13.338766] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:20.281 [2024-11-18 06:51:13.338774] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:17:20.281 [2024-11-18 06:51:13.338782] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.301 ms 00:17:20.281 [2024-11-18 06:51:13.338793] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:20.281 [2024-11-18 06:51:13.338886] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:20.281 [2024-11-18 06:51:13.338896] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:17:20.281 [2024-11-18 06:51:13.338908] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.073 ms 00:17:20.281 [2024-11-18 06:51:13.338916] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:20.281 [2024-11-18 06:51:13.339049] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:17:20.281 [2024-11-18 06:51:13.339062] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:17:20.281 [2024-11-18 06:51:13.339074] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:20.281 [2024-11-18 06:51:13.339092] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:20.281 [2024-11-18 06:51:13.339101] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:17:20.281 [2024-11-18 06:51:13.339109] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:17:20.281 [2024-11-18 06:51:13.339117] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:17:20.281 [2024-11-18 06:51:13.339127] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:17:20.281 [2024-11-18 06:51:13.339137] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:17:20.281 [2024-11-18 06:51:13.339146] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:20.281 [2024-11-18 06:51:13.339154] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:17:20.281 [2024-11-18 06:51:13.339162] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:17:20.281 [2024-11-18 06:51:13.339170] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:20.281 [2024-11-18 06:51:13.339178] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:17:20.281 [2024-11-18 06:51:13.339186] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:17:20.281 [2024-11-18 06:51:13.339194] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:20.281 [2024-11-18 06:51:13.339202] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:17:20.281 [2024-11-18 06:51:13.339213] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:17:20.281 [2024-11-18 06:51:13.339221] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:20.281 [2024-11-18 06:51:13.339230] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:17:20.281 [2024-11-18 06:51:13.339239] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:17:20.281 [2024-11-18 06:51:13.339247] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:20.281 [2024-11-18 06:51:13.339255] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:17:20.281 [2024-11-18 06:51:13.339268] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:17:20.281 [2024-11-18 06:51:13.339276] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:20.281 [2024-11-18 06:51:13.339284] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:17:20.281 [2024-11-18 06:51:13.339292] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:17:20.281 [2024-11-18 06:51:13.339299] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:20.281 [2024-11-18 06:51:13.339308] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:17:20.281 [2024-11-18 06:51:13.339316] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:17:20.281 [2024-11-18 06:51:13.339323] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:20.281 [2024-11-18 06:51:13.339331] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:17:20.281 [2024-11-18 06:51:13.339338] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:17:20.281 [2024-11-18 06:51:13.339346] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:20.281 [2024-11-18 06:51:13.339354] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:17:20.281 [2024-11-18 06:51:13.339362] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:17:20.281 [2024-11-18 06:51:13.339369] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:20.281 [2024-11-18 06:51:13.339377] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:17:20.281 [2024-11-18 06:51:13.339385] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:17:20.281 [2024-11-18 06:51:13.339395] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:20.281 [2024-11-18 06:51:13.339402] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:17:20.281 [2024-11-18 06:51:13.339408] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:17:20.281 [2024-11-18 06:51:13.339415] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:20.281 [2024-11-18 06:51:13.339422] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:17:20.281 [2024-11-18 06:51:13.339434] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:17:20.281 [2024-11-18 06:51:13.339445] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:20.281 [2024-11-18 06:51:13.339460] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:20.281 [2024-11-18 06:51:13.339467] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:17:20.281 [2024-11-18 06:51:13.339474] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:17:20.281 [2024-11-18 06:51:13.339483] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:17:20.281 
[2024-11-18 06:51:13.339490] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:17:20.281 [2024-11-18 06:51:13.339498] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:17:20.281 [2024-11-18 06:51:13.339505] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:17:20.281 [2024-11-18 06:51:13.339514] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:17:20.281 [2024-11-18 06:51:13.339527] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:20.281 [2024-11-18 06:51:13.339538] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:17:20.281 [2024-11-18 06:51:13.339547] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:17:20.281 [2024-11-18 06:51:13.339554] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:17:20.281 [2024-11-18 06:51:13.339561] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:17:20.281 [2024-11-18 06:51:13.339568] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:17:20.281 [2024-11-18 06:51:13.339576] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:17:20.281 [2024-11-18 06:51:13.339583] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:17:20.282 [2024-11-18 06:51:13.339591] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:17:20.282 [2024-11-18 06:51:13.339598] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:17:20.282 [2024-11-18 06:51:13.339605] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:17:20.282 [2024-11-18 06:51:13.339614] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:17:20.282 [2024-11-18 06:51:13.339621] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:17:20.282 [2024-11-18 06:51:13.339629] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:17:20.282 [2024-11-18 06:51:13.339636] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:17:20.282 [2024-11-18 06:51:13.339643] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:17:20.282 [2024-11-18 06:51:13.339653] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:20.282 [2024-11-18 06:51:13.339666] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:17:20.282 [2024-11-18 06:51:13.339673] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:17:20.282 [2024-11-18 06:51:13.339680] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:17:20.282 [2024-11-18 06:51:13.339687] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:17:20.282 [2024-11-18 06:51:13.339695] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:20.282 [2024-11-18 06:51:13.339702] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:17:20.282 [2024-11-18 06:51:13.339710] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.741 ms 00:17:20.282 [2024-11-18 06:51:13.339722] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:20.282 [2024-11-18 06:51:13.354074] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:20.282 [2024-11-18 06:51:13.354117] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:20.282 [2024-11-18 06:51:13.354129] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.300 ms 00:17:20.282 [2024-11-18 06:51:13.354137] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:20.282 [2024-11-18 06:51:13.354281] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:20.282 [2024-11-18 06:51:13.354296] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:17:20.282 [2024-11-18 06:51:13.354305] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.066 ms 00:17:20.282 [2024-11-18 06:51:13.354312] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:20.544 [2024-11-18 06:51:13.378458] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:20.544 [2024-11-18 06:51:13.378530] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:20.544 [2024-11-18 06:51:13.378550] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.121 ms 00:17:20.544 [2024-11-18 06:51:13.378562] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:20.544 [2024-11-18 06:51:13.378689] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:20.544 [2024-11-18 06:51:13.378730] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:20.544 [2024-11-18 06:51:13.378745] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:17:20.544 [2024-11-18 06:51:13.378756] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:20.544 [2024-11-18 06:51:13.379390] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:20.544 [2024-11-18 06:51:13.379437] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:20.544 [2024-11-18 06:51:13.379460] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.598 ms 00:17:20.544 [2024-11-18 06:51:13.379473] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:20.544 [2024-11-18 06:51:13.379693] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:20.544 [2024-11-18 06:51:13.379708] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:20.544 [2024-11-18 06:51:13.379729] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.179 ms 00:17:20.544 [2024-11-18 06:51:13.379742] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:20.544 [2024-11-18 06:51:13.388932] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:20.544 [2024-11-18 06:51:13.388999] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:20.544 [2024-11-18 06:51:13.389017] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.158 ms 00:17:20.544 [2024-11-18 06:51:13.389025] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:20.544 [2024-11-18 06:51:13.392896] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:17:20.544 [2024-11-18 06:51:13.392945] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:17:20.544 [2024-11-18 06:51:13.392957] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:20.544 [2024-11-18 06:51:13.392965] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:17:20.544 [2024-11-18 06:51:13.392987] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.820 ms 00:17:20.544 [2024-11-18 06:51:13.392995] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:20.544 [2024-11-18 06:51:13.408601] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:20.544 [2024-11-18 06:51:13.408663] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:17:20.544 [2024-11-18 06:51:13.408674] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.545 ms 00:17:20.544 [2024-11-18 06:51:13.408682] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:20.544 [2024-11-18 06:51:13.411578] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:20.544 [2024-11-18 06:51:13.411624] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:17:20.544 [2024-11-18 06:51:13.411634] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.801 ms 00:17:20.544 [2024-11-18 06:51:13.411641] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:20.544 [2024-11-18 06:51:13.414239] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:20.544 [2024-11-18 06:51:13.414406] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:17:20.544 [2024-11-18 06:51:13.414424] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.535 ms 00:17:20.544 [2024-11-18 06:51:13.414432] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:20.544 [2024-11-18 06:51:13.414790] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:20.544 [2024-11-18 06:51:13.414805] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:17:20.544 [2024-11-18 06:51:13.414815] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.283 ms 00:17:20.544 [2024-11-18 06:51:13.414824] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:20.544 [2024-11-18 06:51:13.438457] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:20.544 [2024-11-18 06:51:13.438522] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:17:20.544 [2024-11-18 06:51:13.438536] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 
23.609 ms 00:17:20.544 [2024-11-18 06:51:13.438556] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:20.544 [2024-11-18 06:51:13.447003] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:17:20.544 [2024-11-18 06:51:13.465938] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:20.544 [2024-11-18 06:51:13.466168] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:17:20.544 [2024-11-18 06:51:13.466190] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.279 ms 00:17:20.544 [2024-11-18 06:51:13.466199] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:20.544 [2024-11-18 06:51:13.466298] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:20.544 [2024-11-18 06:51:13.466310] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:17:20.544 [2024-11-18 06:51:13.466320] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:17:20.544 [2024-11-18 06:51:13.466332] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:20.544 [2024-11-18 06:51:13.466386] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:20.544 [2024-11-18 06:51:13.466402] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:17:20.544 [2024-11-18 06:51:13.466411] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:17:20.544 [2024-11-18 06:51:13.466419] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:20.544 [2024-11-18 06:51:13.466441] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:20.544 [2024-11-18 06:51:13.466453] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:17:20.544 [2024-11-18 06:51:13.466462] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:17:20.544 [2024-11-18 06:51:13.466470] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:20.544 [2024-11-18 06:51:13.466510] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:17:20.544 [2024-11-18 06:51:13.466521] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:20.544 [2024-11-18 06:51:13.466529] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:17:20.544 [2024-11-18 06:51:13.466537] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:17:20.545 [2024-11-18 06:51:13.466545] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:20.545 [2024-11-18 06:51:13.472339] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:20.545 [2024-11-18 06:51:13.472388] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:17:20.545 [2024-11-18 06:51:13.472399] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.773 ms 00:17:20.545 [2024-11-18 06:51:13.472416] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:20.545 [2024-11-18 06:51:13.472513] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:20.545 [2024-11-18 06:51:13.472525] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:17:20.545 [2024-11-18 06:51:13.472538] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:17:20.545 [2024-11-18 06:51:13.472549] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:20.545 
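A quick consistency check on the startup dump above: the "SB metadata layout - nvc" table lists each region in blocks (hex blk_offs/blk_sz), while the layout dump gives the same regions in MiB, and the two line up if each FTL block is 4 KiB. Matching type 0x2 to the l2p region and type 0x3 to band_md is inferred here from the corresponding sizes, not stated in the log; a sketch of the arithmetic:

  #!/usr/bin/env bash
  # blk_sz values come from the superblock region table above; the 4 KiB
  # block size is an assumption that every row of the table reproduces.
  blk_kib=4
  echo "l2p:     $(( 0x5a00 * blk_kib / 1024 )) MiB"  # 23040 blocks -> 90 MiB ("blocks: 90.00 MiB")
  echo "band_md: $(( 0x80 * blk_kib )) KiB"           # 128 blocks -> 512 KiB ("blocks: 0.50 MiB")

The same 90 MiB also falls out of the L2P parameters printed with the layout: 23592960 entries at a 4-byte address size is 94371840 bytes, i.e. exactly 90 MiB.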
[2024-11-18 06:51:13.473598] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:17:20.545 [2024-11-18 06:51:13.474912] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 155.529 ms, result 0 00:17:20.545 [2024-11-18 06:51:13.476144] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:17:20.545 [2024-11-18 06:51:13.483562] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:17:20.813
[2024-11-18T06:51:13.900Z] Copying: 4096/4096 [kB] (average 14 MBps)
[2024-11-18 06:51:13.752031] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:17:20.813 [2024-11-18 06:51:13.753034] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:20.813 [2024-11-18 06:51:13.753079] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:17:20.813 [2024-11-18 06:51:13.753091] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:17:20.813 [2024-11-18 06:51:13.753099] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:20.813 [2024-11-18 06:51:13.753120] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:17:20.813 [2024-11-18 06:51:13.753769] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:20.813 [2024-11-18 06:51:13.753806] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:17:20.813 [2024-11-18 06:51:13.753817] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.637 ms 00:17:20.813 [2024-11-18 06:51:13.753826] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:20.813 [2024-11-18 06:51:13.755841] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:20.813 [2024-11-18 06:51:13.755886] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:17:20.813 [2024-11-18 06:51:13.755896] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.989 ms 00:17:20.813 [2024-11-18 06:51:13.755911] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:20.813 [2024-11-18 06:51:13.760354] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:20.813 [2024-11-18 06:51:13.760391] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:17:20.813 [2024-11-18 06:51:13.760401] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.426 ms 00:17:20.813 [2024-11-18 06:51:13.760409] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:20.813 [2024-11-18 06:51:13.767384] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:20.813 [2024-11-18 06:51:13.767425] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:17:20.813 [2024-11-18 06:51:13.767435] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.934 ms 00:17:20.813 [2024-11-18 06:51:13.767456] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:20.813 [2024-11-18 06:51:13.770290] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:20.813 [2024-11-18 06:51:13.770472] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:17:20.813 [2024-11-18 06:51:13.770491] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0]
duration: 2.791 ms 00:17:20.813 [2024-11-18 06:51:13.770499] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:20.813 [2024-11-18 06:51:13.775524] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:20.813 [2024-11-18 06:51:13.775584] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:17:20.813 [2024-11-18 06:51:13.775594] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.931 ms 00:17:20.813 [2024-11-18 06:51:13.775603] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:20.813 [2024-11-18 06:51:13.775732] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:20.813 [2024-11-18 06:51:13.775743] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:17:20.813 [2024-11-18 06:51:13.775751] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.085 ms 00:17:20.813 [2024-11-18 06:51:13.775763] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:20.813 [2024-11-18 06:51:13.778677] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:20.813 [2024-11-18 06:51:13.778744] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:17:20.813 [2024-11-18 06:51:13.778754] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.896 ms 00:17:20.813 [2024-11-18 06:51:13.778761] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:20.813 [2024-11-18 06:51:13.781445] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:20.813 [2024-11-18 06:51:13.781609] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:17:20.813 [2024-11-18 06:51:13.781625] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.643 ms 00:17:20.813 [2024-11-18 06:51:13.781633] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:20.813 [2024-11-18 06:51:13.783933] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:20.813 [2024-11-18 06:51:13.783995] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:17:20.813 [2024-11-18 06:51:13.784005] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.262 ms 00:17:20.813 [2024-11-18 06:51:13.784012] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:20.813 [2024-11-18 06:51:13.786130] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:20.813 [2024-11-18 06:51:13.786174] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:17:20.813 [2024-11-18 06:51:13.786183] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.048 ms 00:17:20.813 [2024-11-18 06:51:13.786189] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:20.813 [2024-11-18 06:51:13.786229] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:17:20.813 [2024-11-18 06:51:13.786244] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:17:20.813 [2024-11-18 06:51:13.786254] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:17:20.813 [2024-11-18 06:51:13.786262] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:17:20.813 [2024-11-18 06:51:13.786269] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:17:20.813 [2024-11-18 
06:51:13.786276] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:17:20.813 [2024-11-18 06:51:13.786284] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:17:20.813 [2024-11-18 06:51:13.786291] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:17:20.813 [2024-11-18 06:51:13.786299] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:17:20.813 [2024-11-18 06:51:13.786306] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:17:20.813 [2024-11-18 06:51:13.786315] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:17:20.813 [2024-11-18 06:51:13.786322] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:17:20.813 [2024-11-18 06:51:13.786330] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:17:20.813 [2024-11-18 06:51:13.786337] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:17:20.813 [2024-11-18 06:51:13.786345] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:17:20.813 [2024-11-18 06:51:13.786352] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:17:20.813 [2024-11-18 06:51:13.786359] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:17:20.813 [2024-11-18 06:51:13.786366] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:17:20.813 [2024-11-18 06:51:13.786373] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:17:20.813 [2024-11-18 06:51:13.786381] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:17:20.813 [2024-11-18 06:51:13.786388] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:17:20.813 [2024-11-18 06:51:13.786395] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:17:20.813 [2024-11-18 06:51:13.786403] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:17:20.813 [2024-11-18 06:51:13.786411] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:17:20.813 [2024-11-18 06:51:13.786418] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:17:20.813 [2024-11-18 06:51:13.786425] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:17:20.813 [2024-11-18 06:51:13.786432] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:17:20.813 [2024-11-18 06:51:13.786442] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:17:20.813 [2024-11-18 06:51:13.786450] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:17:20.813 [2024-11-18 06:51:13.786458] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 
00:17:20.813 [2024-11-18 06:51:13.786465] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:17:20.813 [2024-11-18 06:51:13.786472] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:17:20.814 [2024-11-18 06:51:13.786481] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:17:20.814 [2024-11-18 06:51:13.786489] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:17:20.814 [2024-11-18 06:51:13.786497] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:17:20.814 [2024-11-18 06:51:13.786504] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:17:20.814 [2024-11-18 06:51:13.786512] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:17:20.814 [2024-11-18 06:51:13.786520] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:17:20.814 [2024-11-18 06:51:13.786528] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:17:20.814 [2024-11-18 06:51:13.786535] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:17:20.814 [2024-11-18 06:51:13.786543] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:17:20.814 [2024-11-18 06:51:13.786550] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:17:20.814 [2024-11-18 06:51:13.786558] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:17:20.814 [2024-11-18 06:51:13.786566] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:17:20.814 [2024-11-18 06:51:13.786573] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:17:20.814 [2024-11-18 06:51:13.786581] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:17:20.814 [2024-11-18 06:51:13.786588] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:17:20.814 [2024-11-18 06:51:13.786596] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:17:20.814 [2024-11-18 06:51:13.786603] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:17:20.814 [2024-11-18 06:51:13.786611] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:17:20.814 [2024-11-18 06:51:13.786619] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:17:20.814 [2024-11-18 06:51:13.786626] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:17:20.814 [2024-11-18 06:51:13.786633] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:17:20.814 [2024-11-18 06:51:13.786641] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:17:20.814 [2024-11-18 06:51:13.786649] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 
wr_cnt: 0 state: free 00:17:20.814 [2024-11-18 06:51:13.786657] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:17:20.814 [2024-11-18 06:51:13.786664] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:17:20.814 [2024-11-18 06:51:13.786672] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:17:20.814 [2024-11-18 06:51:13.786680] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:17:20.814 [2024-11-18 06:51:13.786687] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:17:20.814 [2024-11-18 06:51:13.786694] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:17:20.814 [2024-11-18 06:51:13.786701] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:17:20.814 [2024-11-18 06:51:13.786708] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:17:20.814 [2024-11-18 06:51:13.786728] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:17:20.814 [2024-11-18 06:51:13.786738] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:17:20.814 [2024-11-18 06:51:13.786746] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:17:20.814 [2024-11-18 06:51:13.786753] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:17:20.814 [2024-11-18 06:51:13.786761] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:17:20.814 [2024-11-18 06:51:13.786769] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:17:20.814 [2024-11-18 06:51:13.786776] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:17:20.814 [2024-11-18 06:51:13.786784] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:17:20.814 [2024-11-18 06:51:13.786792] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:17:20.814 [2024-11-18 06:51:13.786799] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:17:20.814 [2024-11-18 06:51:13.786807] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:17:20.814 [2024-11-18 06:51:13.786814] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:17:20.814 [2024-11-18 06:51:13.786822] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:17:20.814 [2024-11-18 06:51:13.786830] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:17:20.814 [2024-11-18 06:51:13.786837] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:17:20.814 [2024-11-18 06:51:13.786845] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:17:20.814 [2024-11-18 06:51:13.786852] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 79: 0 / 261120 wr_cnt: 0 state: free 00:17:20.814 [2024-11-18 06:51:13.786860] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:17:20.814 [2024-11-18 06:51:13.786867] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:17:20.814 [2024-11-18 06:51:13.786874] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:17:20.814 [2024-11-18 06:51:13.786881] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:17:20.814 [2024-11-18 06:51:13.786889] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:17:20.814 [2024-11-18 06:51:13.786899] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:17:20.814 [2024-11-18 06:51:13.786907] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:17:20.814 [2024-11-18 06:51:13.786915] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:17:20.814 [2024-11-18 06:51:13.786922] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:17:20.814 [2024-11-18 06:51:13.786930] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:17:20.814 [2024-11-18 06:51:13.786937] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:17:20.814 [2024-11-18 06:51:13.786945] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:17:20.814 [2024-11-18 06:51:13.786952] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:17:20.814 [2024-11-18 06:51:13.786959] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:17:20.814 [2024-11-18 06:51:13.786967] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:17:20.814 [2024-11-18 06:51:13.787001] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:17:20.814 [2024-11-18 06:51:13.787011] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:17:20.814 [2024-11-18 06:51:13.787018] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:17:20.814 [2024-11-18 06:51:13.787026] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:17:20.814 [2024-11-18 06:51:13.787043] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:17:20.814 [2024-11-18 06:51:13.787051] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:17:20.814 [2024-11-18 06:51:13.787067] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:17:20.814 [2024-11-18 06:51:13.787075] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: ec0999a9-9a7d-450e-b3a6-a004ddc4ed37 00:17:20.814 [2024-11-18 06:51:13.787083] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:17:20.814 [2024-11-18 06:51:13.787091] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:17:20.814 
[2024-11-18 06:51:13.787103] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:17:20.814 [2024-11-18 06:51:13.787112] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:17:20.814 [2024-11-18 06:51:13.787119] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:17:20.814 [2024-11-18 06:51:13.787128] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:17:20.814 [2024-11-18 06:51:13.787139] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:17:20.814 [2024-11-18 06:51:13.787147] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:17:20.814 [2024-11-18 06:51:13.787154] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:17:20.814 [2024-11-18 06:51:13.787161] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:20.814 [2024-11-18 06:51:13.787169] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:17:20.814 [2024-11-18 06:51:13.787178] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.933 ms 00:17:20.814 [2024-11-18 06:51:13.787186] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:20.814 [2024-11-18 06:51:13.789115] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:20.814 [2024-11-18 06:51:13.789145] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:17:20.814 [2024-11-18 06:51:13.789156] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.904 ms 00:17:20.814 [2024-11-18 06:51:13.789175] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:20.814 [2024-11-18 06:51:13.789287] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:20.814 [2024-11-18 06:51:13.789296] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:17:20.814 [2024-11-18 06:51:13.789306] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.086 ms 00:17:20.814 [2024-11-18 06:51:13.789315] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:20.814 [2024-11-18 06:51:13.797111] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:20.814 [2024-11-18 06:51:13.797157] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:20.815 [2024-11-18 06:51:13.797168] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:20.815 [2024-11-18 06:51:13.797182] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:20.815 [2024-11-18 06:51:13.797258] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:20.815 [2024-11-18 06:51:13.797267] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:20.815 [2024-11-18 06:51:13.797276] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:20.815 [2024-11-18 06:51:13.797288] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:20.815 [2024-11-18 06:51:13.797334] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:20.815 [2024-11-18 06:51:13.797344] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:20.815 [2024-11-18 06:51:13.797352] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:20.815 [2024-11-18 06:51:13.797360] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:20.815 [2024-11-18 06:51:13.797380] mngt/ftl_mngt.c: 427:trace_step: 
*NOTICE*: [FTL][ftl0] Rollback 00:17:20.815 [2024-11-18 06:51:13.797388] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:20.815 [2024-11-18 06:51:13.797395] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:20.815 [2024-11-18 06:51:13.797403] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:20.815 [2024-11-18 06:51:13.811353] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:20.815 [2024-11-18 06:51:13.811407] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:20.815 [2024-11-18 06:51:13.811420] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:20.815 [2024-11-18 06:51:13.811428] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:20.815 [2024-11-18 06:51:13.822386] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:20.815 [2024-11-18 06:51:13.822444] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:20.815 [2024-11-18 06:51:13.822467] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:20.815 [2024-11-18 06:51:13.822475] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:20.815 [2024-11-18 06:51:13.822530] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:20.815 [2024-11-18 06:51:13.822540] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:20.815 [2024-11-18 06:51:13.822549] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:20.815 [2024-11-18 06:51:13.822561] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:20.815 [2024-11-18 06:51:13.822596] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:20.815 [2024-11-18 06:51:13.822609] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:20.815 [2024-11-18 06:51:13.822620] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:20.815 [2024-11-18 06:51:13.822629] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:20.815 [2024-11-18 06:51:13.822708] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:20.815 [2024-11-18 06:51:13.822732] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:20.815 [2024-11-18 06:51:13.822741] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:20.815 [2024-11-18 06:51:13.822749] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:20.815 [2024-11-18 06:51:13.822781] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:20.815 [2024-11-18 06:51:13.822791] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:17:20.815 [2024-11-18 06:51:13.822803] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:20.815 [2024-11-18 06:51:13.822812] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:20.815 [2024-11-18 06:51:13.822853] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:20.815 [2024-11-18 06:51:13.822866] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:20.815 [2024-11-18 06:51:13.822875] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:20.815 [2024-11-18 06:51:13.822883] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:17:20.815 [2024-11-18 06:51:13.822931] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:20.815 [2024-11-18 06:51:13.822944] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:20.815 [2024-11-18 06:51:13.822953] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:20.815 [2024-11-18 06:51:13.822961] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:20.815 [2024-11-18 06:51:13.823144] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 70.074 ms, result 0 00:17:21.076 00:17:21.076 00:17:21.076 06:51:14 ftl.ftl_trim -- ftl/trim.sh@93 -- # svcpid=85495 00:17:21.076 06:51:14 ftl.ftl_trim -- ftl/trim.sh@94 -- # waitforlisten 85495 00:17:21.076 06:51:14 ftl.ftl_trim -- common/autotest_common.sh@835 -- # '[' -z 85495 ']' 00:17:21.076 06:51:14 ftl.ftl_trim -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:21.076 06:51:14 ftl.ftl_trim -- ftl/trim.sh@92 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init 00:17:21.076 06:51:14 ftl.ftl_trim -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:21.076 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:21.076 06:51:14 ftl.ftl_trim -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:21.076 06:51:14 ftl.ftl_trim -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:21.076 06:51:14 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:17:21.076 [2024-11-18 06:51:14.126563] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 00:17:21.076 [2024-11-18 06:51:14.126704] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85495 ] 00:17:21.337 [2024-11-18 06:51:14.286192] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:21.337 [2024-11-18 06:51:14.314703] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:21.910 06:51:14 ftl.ftl_trim -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:21.910 06:51:14 ftl.ftl_trim -- common/autotest_common.sh@868 -- # return 0 00:17:21.910 06:51:14 ftl.ftl_trim -- ftl/trim.sh@96 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config 00:17:22.170 [2024-11-18 06:51:15.184285] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:22.170 [2024-11-18 06:51:15.184360] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:22.433 [2024-11-18 06:51:15.361327] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:22.433 [2024-11-18 06:51:15.361386] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:17:22.433 [2024-11-18 06:51:15.361401] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:17:22.433 [2024-11-18 06:51:15.361412] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.433 [2024-11-18 06:51:15.363933] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:22.433 [2024-11-18 06:51:15.364149] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:22.433 [2024-11-18 06:51:15.364172] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.501 ms 00:17:22.433 [2024-11-18 06:51:15.364186] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.433 [2024-11-18 06:51:15.364389] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:17:22.433 [2024-11-18 06:51:15.364699] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:17:22.433 [2024-11-18 06:51:15.364721] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:22.433 [2024-11-18 06:51:15.364733] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:22.433 [2024-11-18 06:51:15.364747] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.346 ms 00:17:22.433 [2024-11-18 06:51:15.364759] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.433 [2024-11-18 06:51:15.366445] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:17:22.433 [2024-11-18 06:51:15.370161] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:22.433 [2024-11-18 06:51:15.370212] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:17:22.433 [2024-11-18 06:51:15.370228] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.714 ms 00:17:22.433 [2024-11-18 06:51:15.370237] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.433 [2024-11-18 06:51:15.370320] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:22.433 [2024-11-18 06:51:15.370332] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:17:22.433 [2024-11-18 06:51:15.370347] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:17:22.433 [2024-11-18 06:51:15.370360] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.433 [2024-11-18 06:51:15.378254] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:22.433 [2024-11-18 06:51:15.378293] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:22.433 [2024-11-18 06:51:15.378305] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.832 ms 00:17:22.433 [2024-11-18 06:51:15.378313] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.433 [2024-11-18 06:51:15.378424] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:22.433 [2024-11-18 06:51:15.378433] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:22.433 [2024-11-18 06:51:15.378446] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms 00:17:22.433 [2024-11-18 06:51:15.378456] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.433 [2024-11-18 06:51:15.378483] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:22.433 [2024-11-18 06:51:15.378492] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:17:22.433 [2024-11-18 06:51:15.378505] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:17:22.433 [2024-11-18 06:51:15.378512] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.433 [2024-11-18 06:51:15.378536] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:17:22.433 [2024-11-18 06:51:15.380656] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:17:22.433 [2024-11-18 06:51:15.380698] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:22.433 [2024-11-18 06:51:15.380708] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.127 ms 00:17:22.433 [2024-11-18 06:51:15.380727] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.433 [2024-11-18 06:51:15.380765] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:22.433 [2024-11-18 06:51:15.380775] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:17:22.433 [2024-11-18 06:51:15.380783] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:17:22.434 [2024-11-18 06:51:15.380793] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.434 [2024-11-18 06:51:15.380814] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:17:22.434 [2024-11-18 06:51:15.380835] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:17:22.434 [2024-11-18 06:51:15.380877] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:17:22.434 [2024-11-18 06:51:15.380898] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:17:22.434 [2024-11-18 06:51:15.381024] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:17:22.434 [2024-11-18 06:51:15.381044] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:17:22.434 [2024-11-18 06:51:15.381055] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:17:22.434 [2024-11-18 06:51:15.381069] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:17:22.434 [2024-11-18 06:51:15.381080] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:17:22.434 [2024-11-18 06:51:15.381093] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:17:22.434 [2024-11-18 06:51:15.381101] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:17:22.434 [2024-11-18 06:51:15.381114] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:17:22.434 [2024-11-18 06:51:15.381124] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:17:22.434 [2024-11-18 06:51:15.381137] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:22.434 [2024-11-18 06:51:15.381144] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:17:22.434 [2024-11-18 06:51:15.381154] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.324 ms 00:17:22.434 [2024-11-18 06:51:15.381164] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.434 [2024-11-18 06:51:15.381256] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:22.434 [2024-11-18 06:51:15.381265] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:17:22.434 [2024-11-18 06:51:15.381274] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:17:22.434 [2024-11-18 06:51:15.381281] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.434 [2024-11-18 06:51:15.381387] 
ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:17:22.434 [2024-11-18 06:51:15.381397] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:17:22.434 [2024-11-18 06:51:15.381408] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:22.434 [2024-11-18 06:51:15.381418] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:22.434 [2024-11-18 06:51:15.381431] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:17:22.434 [2024-11-18 06:51:15.381439] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:17:22.434 [2024-11-18 06:51:15.381449] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:17:22.434 [2024-11-18 06:51:15.381463] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:17:22.434 [2024-11-18 06:51:15.381473] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:17:22.434 [2024-11-18 06:51:15.381480] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:22.434 [2024-11-18 06:51:15.381490] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:17:22.434 [2024-11-18 06:51:15.381498] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:17:22.434 [2024-11-18 06:51:15.381508] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:22.434 [2024-11-18 06:51:15.381516] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:17:22.434 [2024-11-18 06:51:15.381526] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:17:22.434 [2024-11-18 06:51:15.381534] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:22.434 [2024-11-18 06:51:15.381543] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:17:22.434 [2024-11-18 06:51:15.381551] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:17:22.434 [2024-11-18 06:51:15.381560] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:22.434 [2024-11-18 06:51:15.381569] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:17:22.434 [2024-11-18 06:51:15.381580] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:17:22.434 [2024-11-18 06:51:15.381590] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:22.434 [2024-11-18 06:51:15.381600] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:17:22.434 [2024-11-18 06:51:15.381608] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:17:22.434 [2024-11-18 06:51:15.381619] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:22.434 [2024-11-18 06:51:15.381627] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:17:22.434 [2024-11-18 06:51:15.381637] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:17:22.434 [2024-11-18 06:51:15.381645] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:22.434 [2024-11-18 06:51:15.381655] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:17:22.434 [2024-11-18 06:51:15.381664] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:17:22.434 [2024-11-18 06:51:15.381673] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:22.434 [2024-11-18 06:51:15.381680] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:17:22.434 [2024-11-18 
06:51:15.381688] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:17:22.434 [2024-11-18 06:51:15.381695] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:22.434 [2024-11-18 06:51:15.381703] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:17:22.434 [2024-11-18 06:51:15.381710] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:17:22.434 [2024-11-18 06:51:15.381720] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:22.434 [2024-11-18 06:51:15.381727] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:17:22.434 [2024-11-18 06:51:15.381736] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:17:22.434 [2024-11-18 06:51:15.381742] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:22.434 [2024-11-18 06:51:15.381751] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:17:22.434 [2024-11-18 06:51:15.381758] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:17:22.434 [2024-11-18 06:51:15.381766] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:22.434 [2024-11-18 06:51:15.381772] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:17:22.434 [2024-11-18 06:51:15.381784] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:17:22.434 [2024-11-18 06:51:15.381792] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:22.434 [2024-11-18 06:51:15.381802] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:22.434 [2024-11-18 06:51:15.381809] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:17:22.434 [2024-11-18 06:51:15.381818] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:17:22.434 [2024-11-18 06:51:15.381825] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:17:22.434 [2024-11-18 06:51:15.381835] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:17:22.434 [2024-11-18 06:51:15.381841] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:17:22.434 [2024-11-18 06:51:15.381851] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:17:22.434 [2024-11-18 06:51:15.381861] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:17:22.434 [2024-11-18 06:51:15.381875] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:22.434 [2024-11-18 06:51:15.381884] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:17:22.434 [2024-11-18 06:51:15.381902] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:17:22.434 [2024-11-18 06:51:15.381909] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:17:22.434 [2024-11-18 06:51:15.381918] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:17:22.434 [2024-11-18 06:51:15.381925] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:17:22.434 
[2024-11-18 06:51:15.381940] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:17:22.434 [2024-11-18 06:51:15.381947] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:17:22.434 [2024-11-18 06:51:15.381956] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:17:22.434 [2024-11-18 06:51:15.381963] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:17:22.434 [2024-11-18 06:51:15.382001] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:17:22.434 [2024-11-18 06:51:15.382010] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:17:22.434 [2024-11-18 06:51:15.382019] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:17:22.434 [2024-11-18 06:51:15.382026] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:17:22.434 [2024-11-18 06:51:15.382039] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:17:22.434 [2024-11-18 06:51:15.382046] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:17:22.434 [2024-11-18 06:51:15.382058] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:22.434 [2024-11-18 06:51:15.382066] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:17:22.434 [2024-11-18 06:51:15.382076] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:17:22.434 [2024-11-18 06:51:15.382084] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:17:22.434 [2024-11-18 06:51:15.382093] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:17:22.435 [2024-11-18 06:51:15.382102] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:22.435 [2024-11-18 06:51:15.382112] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:17:22.435 [2024-11-18 06:51:15.382120] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.787 ms 00:17:22.435 [2024-11-18 06:51:15.382129] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.435 [2024-11-18 06:51:15.395769] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:22.435 [2024-11-18 06:51:15.395817] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:22.435 [2024-11-18 06:51:15.395829] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.581 ms 00:17:22.435 [2024-11-18 06:51:15.395840] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.435 [2024-11-18 06:51:15.395972] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:22.435 [2024-11-18 06:51:15.396011] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:17:22.435 [2024-11-18 06:51:15.396020] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.066 ms 00:17:22.435 [2024-11-18 06:51:15.396030] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.435 [2024-11-18 06:51:15.408337] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:22.435 [2024-11-18 06:51:15.408389] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:22.435 [2024-11-18 06:51:15.408400] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.285 ms 00:17:22.435 [2024-11-18 06:51:15.408410] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.435 [2024-11-18 06:51:15.408483] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:22.435 [2024-11-18 06:51:15.408495] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:22.435 [2024-11-18 06:51:15.408504] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:17:22.435 [2024-11-18 06:51:15.408514] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.435 [2024-11-18 06:51:15.409022] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:22.435 [2024-11-18 06:51:15.409053] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:22.435 [2024-11-18 06:51:15.409069] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.481 ms 00:17:22.435 [2024-11-18 06:51:15.409081] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.435 [2024-11-18 06:51:15.409237] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:22.435 [2024-11-18 06:51:15.409266] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:22.435 [2024-11-18 06:51:15.409277] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.129 ms 00:17:22.435 [2024-11-18 06:51:15.409288] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.435 [2024-11-18 06:51:15.417337] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:22.435 [2024-11-18 06:51:15.417384] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:22.435 [2024-11-18 06:51:15.417394] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.025 ms 00:17:22.435 [2024-11-18 06:51:15.417404] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.435 [2024-11-18 06:51:15.421131] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:17:22.435 [2024-11-18 06:51:15.421181] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:17:22.435 [2024-11-18 06:51:15.421198] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:22.435 [2024-11-18 06:51:15.421208] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:17:22.435 [2024-11-18 06:51:15.421217] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.698 ms 00:17:22.435 [2024-11-18 06:51:15.421227] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.435 [2024-11-18 06:51:15.439629] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:22.435 [2024-11-18 
06:51:15.439684] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:17:22.435 [2024-11-18 06:51:15.439697] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.345 ms 00:17:22.435 [2024-11-18 06:51:15.439709] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.435 [2024-11-18 06:51:15.442526] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:22.435 [2024-11-18 06:51:15.442710] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:17:22.435 [2024-11-18 06:51:15.442751] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.730 ms 00:17:22.435 [2024-11-18 06:51:15.442762] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.435 [2024-11-18 06:51:15.445496] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:22.435 [2024-11-18 06:51:15.445549] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:17:22.435 [2024-11-18 06:51:15.445559] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.589 ms 00:17:22.435 [2024-11-18 06:51:15.445568] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.435 [2024-11-18 06:51:15.445907] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:22.435 [2024-11-18 06:51:15.445937] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:17:22.435 [2024-11-18 06:51:15.445948] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.258 ms 00:17:22.435 [2024-11-18 06:51:15.445958] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.435 [2024-11-18 06:51:15.479195] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:22.435 [2024-11-18 06:51:15.479268] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:17:22.435 [2024-11-18 06:51:15.479290] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.199 ms 00:17:22.435 [2024-11-18 06:51:15.479307] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.435 [2024-11-18 06:51:15.487446] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:17:22.435 [2024-11-18 06:51:15.506239] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:22.435 [2024-11-18 06:51:15.506292] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:17:22.435 [2024-11-18 06:51:15.506308] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.827 ms 00:17:22.435 [2024-11-18 06:51:15.506317] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.435 [2024-11-18 06:51:15.506412] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:22.435 [2024-11-18 06:51:15.506424] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:17:22.435 [2024-11-18 06:51:15.506439] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:17:22.435 [2024-11-18 06:51:15.506447] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.435 [2024-11-18 06:51:15.506505] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:22.435 [2024-11-18 06:51:15.506518] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:17:22.435 [2024-11-18 06:51:15.506529] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:17:22.435 [2024-11-18 
06:51:15.506537] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.435 [2024-11-18 06:51:15.506566] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:22.435 [2024-11-18 06:51:15.506575] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:17:22.435 [2024-11-18 06:51:15.506588] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:17:22.435 [2024-11-18 06:51:15.506605] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.435 [2024-11-18 06:51:15.506642] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:17:22.435 [2024-11-18 06:51:15.506656] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:22.435 [2024-11-18 06:51:15.506666] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:17:22.435 [2024-11-18 06:51:15.506674] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:17:22.435 [2024-11-18 06:51:15.506683] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.435 [2024-11-18 06:51:15.512869] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:22.435 [2024-11-18 06:51:15.513081] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:17:22.435 [2024-11-18 06:51:15.513101] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.162 ms 00:17:22.435 [2024-11-18 06:51:15.513114] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.435 [2024-11-18 06:51:15.513670] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:22.435 [2024-11-18 06:51:15.513720] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:17:22.435 [2024-11-18 06:51:15.513733] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:17:22.435 [2024-11-18 06:51:15.513745] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.435 [2024-11-18 06:51:15.514804] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:17:22.697 [2024-11-18 06:51:15.516190] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 153.161 ms, result 0 00:17:22.697 [2024-11-18 06:51:15.518775] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:17:22.697 Some configs were skipped because the RPC state that can call them passed over. 
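The two RPCs that follow exercise trim at both ends of the device's address space. From the layout dump above, ftl0 exposes 23592960 L2P entries with a 4-byte address size (23592960 x 4 B = 90.00 MiB, matching the "Region l2p ... blocks: 90.00 MiB" line), so the second call's LBA of 23591936 = 23592960 - 1024 targets the last 1024 addressable blocks. A minimal shell sketch of the same two calls, using the rpc.py path and bdev name from this run (both are specific to this environment; adjust for your checkout, and it assumes an FTL bdev named ftl0 is already configured as in this log):

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    # Trim the first 1024 blocks of the FTL bdev.
    $RPC bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024

    # Trim the last 1024 addressable blocks: 23592960 - 1024 = 23591936.
    $RPC bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024

Each call prints "true" on success, as seen in the output below.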
00:17:22.697 06:51:15 ftl.ftl_trim -- ftl/trim.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024 00:17:22.697 [2024-11-18 06:51:15.747791] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:22.697 [2024-11-18 06:51:15.747961] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:17:22.697 [2024-11-18 06:51:15.748050] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.833 ms 00:17:22.697 [2024-11-18 06:51:15.748076] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.697 [2024-11-18 06:51:15.748137] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 3.185 ms, result 0 00:17:22.697 true 00:17:22.697 06:51:15 ftl.ftl_trim -- ftl/trim.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024 00:17:22.958 [2024-11-18 06:51:15.963894] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:22.958 [2024-11-18 06:51:15.964341] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:17:22.958 [2024-11-18 06:51:15.964550] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.666 ms 00:17:22.958 [2024-11-18 06:51:15.964626] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.958 [2024-11-18 06:51:15.964892] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 3.636 ms, result 0 00:17:22.958 true 00:17:22.958 06:51:15 ftl.ftl_trim -- ftl/trim.sh@102 -- # killprocess 85495 00:17:22.958 06:51:15 ftl.ftl_trim -- common/autotest_common.sh@954 -- # '[' -z 85495 ']' 00:17:22.958 06:51:15 ftl.ftl_trim -- common/autotest_common.sh@958 -- # kill -0 85495 00:17:22.958 06:51:15 ftl.ftl_trim -- common/autotest_common.sh@959 -- # uname 00:17:22.958 06:51:15 ftl.ftl_trim -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:22.958 06:51:15 ftl.ftl_trim -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 85495 00:17:22.958 killing process with pid 85495 00:17:22.958 06:51:16 ftl.ftl_trim -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:22.958 06:51:16 ftl.ftl_trim -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:22.958 06:51:16 ftl.ftl_trim -- common/autotest_common.sh@972 -- # echo 'killing process with pid 85495' 00:17:22.958 06:51:16 ftl.ftl_trim -- common/autotest_common.sh@973 -- # kill 85495 00:17:22.958 06:51:16 ftl.ftl_trim -- common/autotest_common.sh@978 -- # wait 85495 00:17:23.221 [2024-11-18 06:51:16.203449] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:23.221 [2024-11-18 06:51:16.203521] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:17:23.221 [2024-11-18 06:51:16.203539] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:17:23.221 [2024-11-18 06:51:16.203548] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.221 [2024-11-18 06:51:16.203577] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:17:23.221 [2024-11-18 06:51:16.204444] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:23.221 [2024-11-18 06:51:16.204480] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:17:23.221 [2024-11-18 06:51:16.204494] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.844 ms 00:17:23.221 [2024-11-18 06:51:16.204507] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.221 [2024-11-18 06:51:16.204842] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:23.221 [2024-11-18 06:51:16.204869] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:17:23.221 [2024-11-18 06:51:16.204879] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.281 ms 00:17:23.221 [2024-11-18 06:51:16.204891] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.221 [2024-11-18 06:51:16.209908] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:23.221 [2024-11-18 06:51:16.209955] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:17:23.221 [2024-11-18 06:51:16.209967] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.996 ms 00:17:23.221 [2024-11-18 06:51:16.209995] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.221 [2024-11-18 06:51:16.217145] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:23.221 [2024-11-18 06:51:16.217405] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:17:23.221 [2024-11-18 06:51:16.217426] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.083 ms 00:17:23.221 [2024-11-18 06:51:16.217439] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.221 [2024-11-18 06:51:16.220357] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:23.221 [2024-11-18 06:51:16.220411] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:17:23.221 [2024-11-18 06:51:16.220422] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.847 ms 00:17:23.221 [2024-11-18 06:51:16.220432] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.221 [2024-11-18 06:51:16.226188] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:23.221 [2024-11-18 06:51:16.226244] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:17:23.221 [2024-11-18 06:51:16.226256] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.688 ms 00:17:23.221 [2024-11-18 06:51:16.226269] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.221 [2024-11-18 06:51:16.226423] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:23.221 [2024-11-18 06:51:16.226438] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:17:23.221 [2024-11-18 06:51:16.226450] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.095 ms 00:17:23.221 [2024-11-18 06:51:16.226462] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.221 [2024-11-18 06:51:16.230165] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:23.221 [2024-11-18 06:51:16.230222] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:17:23.221 [2024-11-18 06:51:16.230234] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.682 ms 00:17:23.222 [2024-11-18 06:51:16.230250] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.222 [2024-11-18 06:51:16.233534] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:23.222 [2024-11-18 06:51:16.233591] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:17:23.222 [2024-11-18 
06:51:16.233602] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.234 ms 00:17:23.222 [2024-11-18 06:51:16.233612] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.222 [2024-11-18 06:51:16.235844] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:23.222 [2024-11-18 06:51:16.236054] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:17:23.222 [2024-11-18 06:51:16.236073] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.165 ms 00:17:23.222 [2024-11-18 06:51:16.236083] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.222 [2024-11-18 06:51:16.239071] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:23.222 [2024-11-18 06:51:16.239285] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:17:23.222 [2024-11-18 06:51:16.239306] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.487 ms 00:17:23.222 [2024-11-18 06:51:16.239317] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.222 [2024-11-18 06:51:16.239472] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:17:23.222 [2024-11-18 06:51:16.239517] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:17:23.222 [2024-11-18 06:51:16.239529] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:17:23.222 [2024-11-18 06:51:16.239544] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:17:23.222 [2024-11-18 06:51:16.239552] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:17:23.222 [2024-11-18 06:51:16.239563] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:17:23.222 [2024-11-18 06:51:16.239571] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:17:23.222 [2024-11-18 06:51:16.239581] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:17:23.222 [2024-11-18 06:51:16.239589] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:17:23.222 [2024-11-18 06:51:16.239600] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:17:23.222 [2024-11-18 06:51:16.239608] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:17:23.222 [2024-11-18 06:51:16.239621] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:17:23.222 [2024-11-18 06:51:16.239630] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:17:23.222 [2024-11-18 06:51:16.239640] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:17:23.222 [2024-11-18 06:51:16.239648] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:17:23.222 [2024-11-18 06:51:16.239658] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:17:23.222 [2024-11-18 06:51:16.239665] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:17:23.222 [2024-11-18 06:51:16.239675] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:17:23.222 [2024-11-18 06:51:16.239685] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:17:23.222 [2024-11-18 06:51:16.239697] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:17:23.222 [2024-11-18 06:51:16.239705] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:17:23.222 [2024-11-18 06:51:16.239714] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:17:23.222 [2024-11-18 06:51:16.239725] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:17:23.222 [2024-11-18 06:51:16.239734] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:17:23.222 [2024-11-18 06:51:16.239743] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:17:23.222 [2024-11-18 06:51:16.239752] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:17:23.222 [2024-11-18 06:51:16.239761] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:17:23.222 [2024-11-18 06:51:16.239771] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:17:23.222 [2024-11-18 06:51:16.239780] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:17:23.222 [2024-11-18 06:51:16.239792] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:17:23.222 [2024-11-18 06:51:16.239799] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:17:23.222 [2024-11-18 06:51:16.239809] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:17:23.222 [2024-11-18 06:51:16.239817] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:17:23.222 [2024-11-18 06:51:16.239827] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:17:23.222 [2024-11-18 06:51:16.239837] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:17:23.222 [2024-11-18 06:51:16.239850] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:17:23.222 [2024-11-18 06:51:16.239861] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:17:23.222 [2024-11-18 06:51:16.239872] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:17:23.222 [2024-11-18 06:51:16.239880] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:17:23.222 [2024-11-18 06:51:16.239891] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:17:23.222 [2024-11-18 06:51:16.239898] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:17:23.222 [2024-11-18 06:51:16.239908] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:17:23.222 [2024-11-18 
06:51:16.239916] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:17:23.222 [2024-11-18 06:51:16.239927] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:17:23.222 [2024-11-18 06:51:16.239934] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:17:23.222 [2024-11-18 06:51:16.239944] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:17:23.222 [2024-11-18 06:51:16.239951] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:17:23.222 [2024-11-18 06:51:16.239961] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:17:23.222 [2024-11-18 06:51:16.239969] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:17:23.222 [2024-11-18 06:51:16.240007] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:17:23.222 [2024-11-18 06:51:16.240014] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:17:23.222 [2024-11-18 06:51:16.240027] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:17:23.222 [2024-11-18 06:51:16.240036] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:17:23.222 [2024-11-18 06:51:16.240047] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:17:23.222 [2024-11-18 06:51:16.240054] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:17:23.222 [2024-11-18 06:51:16.240067] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:17:23.222 [2024-11-18 06:51:16.240075] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:17:23.222 [2024-11-18 06:51:16.240088] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:17:23.222 [2024-11-18 06:51:16.240106] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:17:23.223 [2024-11-18 06:51:16.240118] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:17:23.223 [2024-11-18 06:51:16.240126] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:17:23.223 [2024-11-18 06:51:16.240136] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:17:23.223 [2024-11-18 06:51:16.240143] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:17:23.223 [2024-11-18 06:51:16.240152] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:17:23.223 [2024-11-18 06:51:16.240161] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:17:23.223 [2024-11-18 06:51:16.240171] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:17:23.223 [2024-11-18 06:51:16.240179] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 
00:17:23.223 [2024-11-18 06:51:16.240198] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:17:23.223 [2024-11-18 06:51:16.240206] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:17:23.223 [2024-11-18 06:51:16.240216] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:17:23.223 [2024-11-18 06:51:16.240224] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:17:23.223 [2024-11-18 06:51:16.240235] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:17:23.223 [2024-11-18 06:51:16.240243] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:17:23.223 [2024-11-18 06:51:16.240252] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:17:23.223 [2024-11-18 06:51:16.240259] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:17:23.223 [2024-11-18 06:51:16.240269] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:17:23.223 [2024-11-18 06:51:16.240277] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:17:23.223 [2024-11-18 06:51:16.240287] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:17:23.223 [2024-11-18 06:51:16.240294] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:17:23.223 [2024-11-18 06:51:16.240305] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:17:23.223 [2024-11-18 06:51:16.240312] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:17:23.223 [2024-11-18 06:51:16.240324] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:17:23.223 [2024-11-18 06:51:16.240331] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:17:23.223 [2024-11-18 06:51:16.240343] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:17:23.223 [2024-11-18 06:51:16.240360] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:17:23.223 [2024-11-18 06:51:16.240370] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:17:23.223 [2024-11-18 06:51:16.240379] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:17:23.223 [2024-11-18 06:51:16.240389] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:17:23.223 [2024-11-18 06:51:16.240396] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:17:23.223 [2024-11-18 06:51:16.240407] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:17:23.223 [2024-11-18 06:51:16.240415] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:17:23.223 [2024-11-18 06:51:16.240426] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 
wr_cnt: 0 state: free 00:17:23.223 [2024-11-18 06:51:16.240433] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:17:23.223 [2024-11-18 06:51:16.240443] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:17:23.223 [2024-11-18 06:51:16.240452] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:17:23.223 [2024-11-18 06:51:16.240462] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:17:23.223 [2024-11-18 06:51:16.240469] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:17:23.223 [2024-11-18 06:51:16.240479] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:17:23.223 [2024-11-18 06:51:16.240487] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:17:23.223 [2024-11-18 06:51:16.240503] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:17:23.223 [2024-11-18 06:51:16.240510] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:17:23.223 [2024-11-18 06:51:16.240529] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:17:23.223 [2024-11-18 06:51:16.240538] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: ec0999a9-9a7d-450e-b3a6-a004ddc4ed37 00:17:23.223 [2024-11-18 06:51:16.240549] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:17:23.223 [2024-11-18 06:51:16.240559] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:17:23.223 [2024-11-18 06:51:16.240569] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:17:23.223 [2024-11-18 06:51:16.240579] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:17:23.223 [2024-11-18 06:51:16.240588] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:17:23.223 [2024-11-18 06:51:16.240605] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:17:23.223 [2024-11-18 06:51:16.240615] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:17:23.223 [2024-11-18 06:51:16.240622] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:17:23.223 [2024-11-18 06:51:16.240632] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:17:23.223 [2024-11-18 06:51:16.240641] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:23.223 [2024-11-18 06:51:16.240652] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:17:23.223 [2024-11-18 06:51:16.240660] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.173 ms 00:17:23.223 [2024-11-18 06:51:16.240673] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.223 [2024-11-18 06:51:16.243829] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:23.223 [2024-11-18 06:51:16.243885] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:17:23.223 [2024-11-18 06:51:16.243897] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.113 ms 00:17:23.223 [2024-11-18 06:51:16.243910] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.223 [2024-11-18 06:51:16.244095] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
00:17:23.223 [2024-11-18 06:51:16.244095] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:17:23.223 [2024-11-18 06:51:16.244120] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing
00:17:23.223 [2024-11-18 06:51:16.244130] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.141 ms
00:17:23.223 [2024-11-18 06:51:16.244140] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:23.223 [2024-11-18 06:51:16.254847] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:17:23.223 [2024-11-18 06:51:16.255096] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc
00:17:23.223 [2024-11-18 06:51:16.255118] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:17:23.223 [2024-11-18 06:51:16.255130] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:23.223 [2024-11-18 06:51:16.255244] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:17:23.223 [2024-11-18 06:51:16.255260] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata
00:17:23.223 [2024-11-18 06:51:16.255270] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:17:23.223 [2024-11-18 06:51:16.255284] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:23.223 [2024-11-18 06:51:16.255343] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:17:23.223 [2024-11-18 06:51:16.255357] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map
00:17:23.223 [2024-11-18 06:51:16.255367] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:17:23.223 [2024-11-18 06:51:16.255378] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:23.223 [2024-11-18 06:51:16.255398] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:17:23.224 [2024-11-18 06:51:16.255409] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map
00:17:23.224 [2024-11-18 06:51:16.255419] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:17:23.224 [2024-11-18 06:51:16.255429] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:23.224 [2024-11-18 06:51:16.274552] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:17:23.224 [2024-11-18 06:51:16.274786] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache
00:17:23.224 [2024-11-18 06:51:16.274806] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:17:23.224 [2024-11-18 06:51:16.274817] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:23.224 [2024-11-18 06:51:16.289413] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:17:23.224 [2024-11-18 06:51:16.289634] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata
00:17:23.224 [2024-11-18 06:51:16.289653] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:17:23.224 [2024-11-18 06:51:16.289668] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:23.224 [2024-11-18 06:51:16.289744] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:17:23.224 [2024-11-18 06:51:16.289761] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel
00:17:23.224 [2024-11-18 06:51:16.289770] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:17:23.224 [2024-11-18 06:51:16.289781] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:23.224 [2024-11-18 06:51:16.289819] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:17:23.224 [2024-11-18 06:51:16.289832] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands
00:17:23.224 [2024-11-18 06:51:16.289841] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:17:23.224 [2024-11-18 06:51:16.289852] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:23.224 [2024-11-18 06:51:16.289947] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:17:23.224 [2024-11-18 06:51:16.289961] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools
00:17:23.224 [2024-11-18 06:51:16.290016] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:17:23.224 [2024-11-18 06:51:16.290027] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:23.224 [2024-11-18 06:51:16.290068] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:17:23.224 [2024-11-18 06:51:16.290083] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock
00:17:23.224 [2024-11-18 06:51:16.290093] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:17:23.224 [2024-11-18 06:51:16.290106] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:23.224 [2024-11-18 06:51:16.290162] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:17:23.224 [2024-11-18 06:51:16.290178] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev
00:17:23.224 [2024-11-18 06:51:16.290191] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:17:23.224 [2024-11-18 06:51:16.290204] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:23.224 [2024-11-18 06:51:16.290271] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:17:23.224 [2024-11-18 06:51:16.290285] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev
00:17:23.224 [2024-11-18 06:51:16.290295] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:17:23.224 [2024-11-18 06:51:16.290311] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:23.224 [2024-11-18 06:51:16.290492] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 87.007 ms, result 0
00:17:23.797 06:51:16 ftl.ftl_trim -- ftl/trim.sh@105 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/data --count=65536 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json
[2024-11-18 06:51:16.666885] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization...
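The 'FTL shutdown' management process above is fully traced: every step is logged as a four-record group (Action or Rollback, name, duration, status), and finish_msg sums them to 87.007 ms. To see where such a process spends its time, the name and duration records can be paired up; a minimal sketch, again assuming the console output was saved to ./build.log (hypothetical path) and, as in the excerpts reformatted above, one log record per line:

# Pair each trace_step "name:" record with the "duration:" record that follows it,
# producing a per-step timing table for the FTL management processes in the log.
grep -E 'trace_step.*\[FTL\]\[ftl0\] (name|duration):' build.log |
awk -F '\\[FTL\\]\\[ftl0\\] ' '
  $2 ~ /^name: /     { step = substr($2, 7) }
  $2 ~ /^duration: / { printf "%-32s %s\n", step, substr($2, 11) }'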
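This spdk_dd invocation is the read-back half of the trim test: trim.sh line 105 re-creates the ftl0 bdev from ftl.json and copies 65536 blocks out of it into a plain file (4 KiB blocks here, i.e. the 256 MiB seen in the copy progress further down). A standalone sketch of the same invocation; SPDK_DIR is an assumption, while the flags are taken verbatim from the command above:

#!/usr/bin/env bash
# Read 65536 blocks from the ftl0 bdev into a file, as trim.sh does above.
#   --ib    input bdev name (the FTL device defined in the JSON config)
#   --of    plain output file to write
#   --count number of blocks to copy
#   --json  SPDK config that re-creates ftl0 on top of its base/cache bdevs
SPDK_DIR=${SPDK_DIR:-/home/vagrant/spdk_repo/spdk}   # assumed checkout location
"$SPDK_DIR/build/bin/spdk_dd" --ib=ftl0 \
    --of="$SPDK_DIR/test/ftl/data" --count=65536 \
    --json="$SPDK_DIR/test/ftl/config/ftl.json"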
00:17:23.797 [2024-11-18 06:51:16.667047] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85531 ]
00:17:23.797 [2024-11-18 06:51:16.829917] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:17:23.797 [2024-11-18 06:51:16.869612] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:17:24.058 [2024-11-18 06:51:17.021819] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1
00:17:24.058 [2024-11-18 06:51:17.021918] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1
00:17:24.322 [2024-11-18 06:51:17.186922] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:17:24.322 [2024-11-18 06:51:17.187009] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration
00:17:24.322 [2024-11-18 06:51:17.187027] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms
00:17:24.322 [2024-11-18 06:51:17.187037] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:24.322 [2024-11-18 06:51:17.189930] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:17:24.322 [2024-11-18 06:51:17.190007] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev
00:17:24.322 [2024-11-18 06:51:17.190019] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.870 ms
00:17:24.322 [2024-11-18 06:51:17.190032] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:24.322 [2024-11-18 06:51:17.190143] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache
00:17:24.322 [2024-11-18 06:51:17.190434] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device
00:17:24.322 [2024-11-18 06:51:17.190452] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:17:24.322 [2024-11-18 06:51:17.190466] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev
00:17:24.322 [2024-11-18 06:51:17.190476] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.324 ms
00:17:24.322 [2024-11-18 06:51:17.190485] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:24.322 [2024-11-18 06:51:17.193006] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0
00:17:24.322 [2024-11-18 06:51:17.198001] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:17:24.322 [2024-11-18 06:51:17.198054] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block
00:17:24.322 [2024-11-18 06:51:17.198074] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.998 ms
00:17:24.322 [2024-11-18 06:51:17.198088] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:24.322 [2024-11-18 06:51:17.198189] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:17:24.322 [2024-11-18 06:51:17.198201] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block
00:17:24.322 [2024-11-18 06:51:17.198211] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms
00:17:24.322 [2024-11-18 06:51:17.198219] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:24.322 [2024-11-18 06:51:17.210218] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:17:24.322 [2024-11-18 06:51:17.210264] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:24.322 [2024-11-18 06:51:17.210277] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.950 ms 00:17:24.322 [2024-11-18 06:51:17.210287] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:24.322 [2024-11-18 06:51:17.210449] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:24.322 [2024-11-18 06:51:17.210463] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:24.322 [2024-11-18 06:51:17.210479] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.087 ms 00:17:24.322 [2024-11-18 06:51:17.210488] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:24.322 [2024-11-18 06:51:17.210520] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:24.322 [2024-11-18 06:51:17.210529] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:17:24.322 [2024-11-18 06:51:17.210538] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:17:24.322 [2024-11-18 06:51:17.210546] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:24.322 [2024-11-18 06:51:17.210569] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:17:24.322 [2024-11-18 06:51:17.213362] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:24.322 [2024-11-18 06:51:17.213404] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:24.322 [2024-11-18 06:51:17.213415] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.798 ms 00:17:24.322 [2024-11-18 06:51:17.213424] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:24.322 [2024-11-18 06:51:17.213487] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:24.322 [2024-11-18 06:51:17.213497] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:17:24.322 [2024-11-18 06:51:17.213507] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:17:24.322 [2024-11-18 06:51:17.213520] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:24.322 [2024-11-18 06:51:17.213541] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:17:24.322 [2024-11-18 06:51:17.213565] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:17:24.322 [2024-11-18 06:51:17.213610] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:17:24.322 [2024-11-18 06:51:17.213632] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:17:24.322 [2024-11-18 06:51:17.213744] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:17:24.322 [2024-11-18 06:51:17.213758] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:17:24.322 [2024-11-18 06:51:17.213771] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:17:24.322 [2024-11-18 06:51:17.213782] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:17:24.322 [2024-11-18 06:51:17.213792] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:17:24.322 [2024-11-18 06:51:17.213802] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:17:24.322 [2024-11-18 06:51:17.213813] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:17:24.322 [2024-11-18 06:51:17.213821] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:17:24.322 [2024-11-18 06:51:17.213831] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:17:24.322 [2024-11-18 06:51:17.213842] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:24.322 [2024-11-18 06:51:17.213850] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:17:24.322 [2024-11-18 06:51:17.213864] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.305 ms 00:17:24.322 [2024-11-18 06:51:17.213872] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:24.322 [2024-11-18 06:51:17.213960] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:24.322 [2024-11-18 06:51:17.213970] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:17:24.322 [2024-11-18 06:51:17.214000] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:17:24.322 [2024-11-18 06:51:17.214013] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:24.322 [2024-11-18 06:51:17.214120] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:17:24.322 [2024-11-18 06:51:17.214134] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:17:24.322 [2024-11-18 06:51:17.214147] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:24.322 [2024-11-18 06:51:17.214163] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:24.322 [2024-11-18 06:51:17.214172] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:17:24.322 [2024-11-18 06:51:17.214181] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:17:24.322 [2024-11-18 06:51:17.214189] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:17:24.322 [2024-11-18 06:51:17.214203] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:17:24.322 [2024-11-18 06:51:17.214214] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:17:24.322 [2024-11-18 06:51:17.214222] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:24.322 [2024-11-18 06:51:17.214230] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:17:24.322 [2024-11-18 06:51:17.214238] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:17:24.322 [2024-11-18 06:51:17.214246] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:24.322 [2024-11-18 06:51:17.214255] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:17:24.322 [2024-11-18 06:51:17.214268] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:17:24.322 [2024-11-18 06:51:17.214277] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:24.322 [2024-11-18 06:51:17.214284] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:17:24.323 [2024-11-18 06:51:17.214293] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:17:24.323 [2024-11-18 06:51:17.214300] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:24.323 [2024-11-18 06:51:17.214308] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:17:24.323 [2024-11-18 06:51:17.214317] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:17:24.323 [2024-11-18 06:51:17.214326] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:24.323 [2024-11-18 06:51:17.214335] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:17:24.323 [2024-11-18 06:51:17.214351] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:17:24.323 [2024-11-18 06:51:17.214359] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:24.323 [2024-11-18 06:51:17.214368] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:17:24.323 [2024-11-18 06:51:17.214377] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:17:24.323 [2024-11-18 06:51:17.214384] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:24.323 [2024-11-18 06:51:17.214391] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:17:24.323 [2024-11-18 06:51:17.214398] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:17:24.323 [2024-11-18 06:51:17.214404] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:24.323 [2024-11-18 06:51:17.214411] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:17:24.323 [2024-11-18 06:51:17.214419] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:17:24.323 [2024-11-18 06:51:17.214425] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:24.323 [2024-11-18 06:51:17.214432] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:17:24.323 [2024-11-18 06:51:17.214439] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:17:24.323 [2024-11-18 06:51:17.214445] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:24.323 [2024-11-18 06:51:17.214453] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:17:24.323 [2024-11-18 06:51:17.214459] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:17:24.323 [2024-11-18 06:51:17.214468] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:24.323 [2024-11-18 06:51:17.214476] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:17:24.323 [2024-11-18 06:51:17.214483] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:17:24.323 [2024-11-18 06:51:17.214490] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:24.323 [2024-11-18 06:51:17.214497] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:17:24.323 [2024-11-18 06:51:17.214506] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:17:24.323 [2024-11-18 06:51:17.214513] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:24.323 [2024-11-18 06:51:17.214525] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:24.323 [2024-11-18 06:51:17.214535] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:17:24.323 [2024-11-18 06:51:17.214543] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:17:24.323 [2024-11-18 06:51:17.214551] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:17:24.323 
[2024-11-18 06:51:17.214558] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:17:24.323 [2024-11-18 06:51:17.214565] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:17:24.323 [2024-11-18 06:51:17.214574] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:17:24.323 [2024-11-18 06:51:17.214586] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:17:24.323 [2024-11-18 06:51:17.214595] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:24.323 [2024-11-18 06:51:17.214606] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:17:24.323 [2024-11-18 06:51:17.214614] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:17:24.323 [2024-11-18 06:51:17.214622] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:17:24.323 [2024-11-18 06:51:17.214629] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:17:24.323 [2024-11-18 06:51:17.214637] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:17:24.323 [2024-11-18 06:51:17.214645] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:17:24.323 [2024-11-18 06:51:17.214652] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:17:24.323 [2024-11-18 06:51:17.214659] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:17:24.323 [2024-11-18 06:51:17.214666] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:17:24.323 [2024-11-18 06:51:17.214673] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:17:24.323 [2024-11-18 06:51:17.214682] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:17:24.323 [2024-11-18 06:51:17.214690] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:17:24.323 [2024-11-18 06:51:17.214697] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:17:24.323 [2024-11-18 06:51:17.214704] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:17:24.323 [2024-11-18 06:51:17.214711] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:17:24.323 [2024-11-18 06:51:17.214719] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:24.323 [2024-11-18 06:51:17.214735] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:17:24.323 [2024-11-18 06:51:17.214763] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:17:24.323 [2024-11-18 06:51:17.214770] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:17:24.323 [2024-11-18 06:51:17.214777] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:17:24.323 [2024-11-18 06:51:17.214786] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:24.323 [2024-11-18 06:51:17.214794] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:17:24.323 [2024-11-18 06:51:17.214805] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.735 ms 00:17:24.323 [2024-11-18 06:51:17.214815] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:24.323 [2024-11-18 06:51:17.236015] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:24.323 [2024-11-18 06:51:17.236060] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:24.323 [2024-11-18 06:51:17.236073] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.125 ms 00:17:24.323 [2024-11-18 06:51:17.236083] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:24.323 [2024-11-18 06:51:17.236231] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:24.323 [2024-11-18 06:51:17.236261] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:17:24.323 [2024-11-18 06:51:17.236270] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.074 ms 00:17:24.323 [2024-11-18 06:51:17.236285] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:24.323 [2024-11-18 06:51:17.263019] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:24.323 [2024-11-18 06:51:17.263341] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:24.323 [2024-11-18 06:51:17.263369] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.706 ms 00:17:24.323 [2024-11-18 06:51:17.263381] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:24.323 [2024-11-18 06:51:17.263508] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:24.323 [2024-11-18 06:51:17.263529] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:24.323 [2024-11-18 06:51:17.263541] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:17:24.323 [2024-11-18 06:51:17.263553] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:24.323 [2024-11-18 06:51:17.264351] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:24.323 [2024-11-18 06:51:17.264391] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:24.323 [2024-11-18 06:51:17.264407] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.766 ms 00:17:24.323 [2024-11-18 06:51:17.264420] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:24.323 [2024-11-18 06:51:17.264634] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:24.323 [2024-11-18 06:51:17.264662] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:24.323 [2024-11-18 06:51:17.264678] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.174 ms 00:17:24.323 [2024-11-18 06:51:17.264690] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:24.323 [2024-11-18 06:51:17.276958] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:24.324 [2024-11-18 06:51:17.277030] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:24.324 [2024-11-18 06:51:17.277047] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.238 ms 00:17:24.324 [2024-11-18 06:51:17.277056] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:24.324 [2024-11-18 06:51:17.282126] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:17:24.324 [2024-11-18 06:51:17.282338] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:17:24.324 [2024-11-18 06:51:17.282359] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:24.324 [2024-11-18 06:51:17.282370] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:17:24.324 [2024-11-18 06:51:17.282382] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.160 ms 00:17:24.324 [2024-11-18 06:51:17.282390] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:24.324 [2024-11-18 06:51:17.299404] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:24.324 [2024-11-18 06:51:17.299457] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:17:24.324 [2024-11-18 06:51:17.299470] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.922 ms 00:17:24.324 [2024-11-18 06:51:17.299479] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:24.324 [2024-11-18 06:51:17.302817] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:24.324 [2024-11-18 06:51:17.302870] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:17:24.324 [2024-11-18 06:51:17.302882] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.230 ms 00:17:24.324 [2024-11-18 06:51:17.302890] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:24.324 [2024-11-18 06:51:17.305854] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:24.324 [2024-11-18 06:51:17.306072] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:17:24.324 [2024-11-18 06:51:17.306094] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.890 ms 00:17:24.324 [2024-11-18 06:51:17.306102] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:24.324 [2024-11-18 06:51:17.306502] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:24.324 [2024-11-18 06:51:17.306527] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:17:24.324 [2024-11-18 06:51:17.306538] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.311 ms 00:17:24.324 [2024-11-18 06:51:17.306547] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:24.324 [2024-11-18 06:51:17.340439] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:24.324 [2024-11-18 06:51:17.340494] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:17:24.324 [2024-11-18 06:51:17.340508] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 
33.865 ms
00:17:24.324 [2024-11-18 06:51:17.340526] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:24.324 [2024-11-18 06:51:17.350111] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB
00:17:24.324 [2024-11-18 06:51:17.376073] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:17:24.324 [2024-11-18 06:51:17.376128] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P
00:17:24.324 [2024-11-18 06:51:17.376143] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.437 ms
00:17:24.324 [2024-11-18 06:51:17.376153] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:24.324 [2024-11-18 06:51:17.376263] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:17:24.324 [2024-11-18 06:51:17.376275] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P
00:17:24.324 [2024-11-18 06:51:17.376292] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms
00:17:24.324 [2024-11-18 06:51:17.376304] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:24.324 [2024-11-18 06:51:17.376374] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:17:24.324 [2024-11-18 06:51:17.376385] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization
00:17:24.324 [2024-11-18 06:51:17.376396] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.048 ms
00:17:24.324 [2024-11-18 06:51:17.376404] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:24.324 [2024-11-18 06:51:17.376430] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:17:24.324 [2024-11-18 06:51:17.376440] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller
00:17:24.324 [2024-11-18 06:51:17.376450] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms
00:17:24.324 [2024-11-18 06:51:17.376460] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:24.324 [2024-11-18 06:51:17.376507] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped
00:17:24.324 [2024-11-18 06:51:17.376523] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:17:24.324 [2024-11-18 06:51:17.376531] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup
00:17:24.324 [2024-11-18 06:51:17.376540] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms
00:17:24.324 [2024-11-18 06:51:17.376550] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:24.324 [2024-11-18 06:51:17.384187] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:17:24.324 [2024-11-18 06:51:17.384431] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state
00:17:24.324 [2024-11-18 06:51:17.384453] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.614 ms
00:17:24.324 [2024-11-18 06:51:17.384461] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:24.324 [2024-11-18 06:51:17.384574] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:17:24.324 [2024-11-18 06:51:17.384586] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization
00:17:24.324 [2024-11-18 06:51:17.384598] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.050 ms
00:17:24.324 [2024-11-18 06:51:17.384613] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
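Note the 'Set FTL dirty state' step just above: opening the device for writing marks the superblock dirty, and the matching 'Set FTL clean state' step shows up again in the shutdown sequence further down, so a crash between the two would force recovery on the next load. The transitions are easy to audit from a saved copy of this output; a minimal sketch (build.log again a hypothetical path):

# Show every clean/dirty superblock transition in the run, with line numbers.
grep -nE 'Set FTL (clean|dirty) state' build.log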
00:17:23.224 [2024-11-18 06:51:17.385908] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread
00:17:24.324 [2024-11-18 06:51:17.387526] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 198.598 ms, result 0
00:17:24.324 [2024-11-18 06:51:17.389192] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
00:17:24.324 [2024-11-18 06:51:17.396435] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread
00:17:25.711 [2024-11-18T06:51:19.736Z] Copying: 20/256 [MB] (20 MBps)
[2024-11-18T06:51:20.678Z] Copying: 35/256 [MB] (15 MBps)
[2024-11-18T06:51:21.622Z] Copying: 55/256 [MB] (19 MBps)
[2024-11-18T06:51:22.565Z] Copying: 70/256 [MB] (14 MBps)
[2024-11-18T06:51:23.508Z] Copying: 85/256 [MB] (15 MBps)
[2024-11-18T06:51:24.891Z] Copying: 103/256 [MB] (17 MBps)
[2024-11-18T06:51:25.832Z] Copying: 124/256 [MB] (20 MBps)
[2024-11-18T06:51:26.774Z] Copying: 142/256 [MB] (17 MBps)
[2024-11-18T06:51:27.714Z] Copying: 156/256 [MB] (14 MBps)
[2024-11-18T06:51:28.654Z] Copying: 170/256 [MB] (13 MBps)
[2024-11-18T06:51:29.592Z] Copying: 185/256 [MB] (14 MBps)
[2024-11-18T06:51:30.536Z] Copying: 205/256 [MB] (20 MBps)
[2024-11-18T06:51:31.476Z] Copying: 222/256 [MB] (16 MBps)
[2024-11-18T06:51:32.862Z] Copying: 238/256 [MB] (15 MBps)
[2024-11-18T06:51:32.863Z] Copying: 256/256 [MB] (average 17 MBps)
[2024-11-18 06:51:32.436400] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
00:17:39.776 [2024-11-18 06:51:32.438480] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:17:39.776 [2024-11-18 06:51:32.438520] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel
00:17:39.776 [2024-11-18 06:51:32.438535] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms
00:17:39.776 [2024-11-18 06:51:32.438544] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:39.776 [2024-11-18 06:51:32.438568] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread
00:17:39.776 [2024-11-18 06:51:32.439353] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:17:39.776 [2024-11-18 06:51:32.439400] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device
00:17:39.776 [2024-11-18 06:51:32.439412] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.769 ms
00:17:39.776 [2024-11-18 06:51:32.439422] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:39.776 [2024-11-18 06:51:32.439694] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:17:39.776 [2024-11-18 06:51:32.439705] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller
00:17:39.776 [2024-11-18 06:51:32.439715] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.246 ms
00:17:39.776 [2024-11-18 06:51:32.439727] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:39.776 [2024-11-18 06:51:32.443449] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:17:39.776 [2024-11-18 06:51:32.443468] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P
00:17:39.776 [2024-11-18 06:51:32.443478] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.705 ms
00:17:39.776 [2024-11-18 06:51:32.443490] mngt/ftl_mngt.c:
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:39.776 [2024-11-18 06:51:32.450451] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:39.776 [2024-11-18 06:51:32.450483] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:17:39.776 [2024-11-18 06:51:32.450495] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.943 ms 00:17:39.776 [2024-11-18 06:51:32.450513] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:39.776 [2024-11-18 06:51:32.453592] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:39.776 [2024-11-18 06:51:32.453635] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:17:39.776 [2024-11-18 06:51:32.453645] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.009 ms 00:17:39.776 [2024-11-18 06:51:32.453654] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:39.776 [2024-11-18 06:51:32.458727] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:39.776 [2024-11-18 06:51:32.458778] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:17:39.776 [2024-11-18 06:51:32.458789] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.028 ms 00:17:39.776 [2024-11-18 06:51:32.458797] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:39.776 [2024-11-18 06:51:32.458947] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:39.776 [2024-11-18 06:51:32.458959] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:17:39.776 [2024-11-18 06:51:32.458968] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.087 ms 00:17:39.776 [2024-11-18 06:51:32.458997] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:39.776 [2024-11-18 06:51:32.462122] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:39.776 [2024-11-18 06:51:32.462161] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:17:39.776 [2024-11-18 06:51:32.462171] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.104 ms 00:17:39.776 [2024-11-18 06:51:32.462179] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:39.776 [2024-11-18 06:51:32.465014] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:39.776 [2024-11-18 06:51:32.465052] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:17:39.776 [2024-11-18 06:51:32.465061] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.792 ms 00:17:39.776 [2024-11-18 06:51:32.465068] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:39.776 [2024-11-18 06:51:32.467495] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:39.776 [2024-11-18 06:51:32.467535] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:17:39.776 [2024-11-18 06:51:32.467544] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.385 ms 00:17:39.776 [2024-11-18 06:51:32.467552] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:39.776 [2024-11-18 06:51:32.469600] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:39.776 [2024-11-18 06:51:32.469638] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:17:39.776 [2024-11-18 06:51:32.469647] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 
1.976 ms 00:17:39.776 [2024-11-18 06:51:32.469654] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:39.776 [2024-11-18 06:51:32.469692] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:17:39.776 [2024-11-18 06:51:32.469709] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:17:39.776 [2024-11-18 06:51:32.469719] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:17:39.776 [2024-11-18 06:51:32.469727] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:17:39.776 [2024-11-18 06:51:32.469735] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:17:39.776 [2024-11-18 06:51:32.469743] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:17:39.776 [2024-11-18 06:51:32.469751] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:17:39.776 [2024-11-18 06:51:32.469759] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:17:39.776 [2024-11-18 06:51:32.469767] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:17:39.776 [2024-11-18 06:51:32.469775] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:17:39.776 [2024-11-18 06:51:32.469783] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:17:39.776 [2024-11-18 06:51:32.469791] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:17:39.776 [2024-11-18 06:51:32.469799] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:17:39.776 [2024-11-18 06:51:32.469806] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:17:39.776 [2024-11-18 06:51:32.469814] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:17:39.776 [2024-11-18 06:51:32.469821] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:17:39.776 [2024-11-18 06:51:32.469837] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:17:39.776 [2024-11-18 06:51:32.469846] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:17:39.776 [2024-11-18 06:51:32.469854] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:17:39.776 [2024-11-18 06:51:32.469861] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:17:39.776 [2024-11-18 06:51:32.469869] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:17:39.776 [2024-11-18 06:51:32.469876] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:17:39.776 [2024-11-18 06:51:32.469884] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:17:39.776 [2024-11-18 06:51:32.469891] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:17:39.776 [2024-11-18 06:51:32.469899] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:17:39.776 [2024-11-18 06:51:32.469907] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:17:39.776 [2024-11-18 06:51:32.469914] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:17:39.776 [2024-11-18 06:51:32.469921] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:17:39.776 [2024-11-18 06:51:32.469929] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:17:39.776 [2024-11-18 06:51:32.469936] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:17:39.776 [2024-11-18 06:51:32.469943] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:17:39.776 [2024-11-18 06:51:32.469950] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:17:39.776 [2024-11-18 06:51:32.469957] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:17:39.776 [2024-11-18 06:51:32.469965] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:17:39.776 [2024-11-18 06:51:32.469972] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:17:39.776 [2024-11-18 06:51:32.470001] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:17:39.776 [2024-11-18 06:51:32.470009] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:17:39.776 [2024-11-18 06:51:32.470016] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:17:39.776 [2024-11-18 06:51:32.470024] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:17:39.776 [2024-11-18 06:51:32.470031] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:17:39.776 [2024-11-18 06:51:32.470039] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:17:39.776 [2024-11-18 06:51:32.470046] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:17:39.776 [2024-11-18 06:51:32.470054] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:17:39.776 [2024-11-18 06:51:32.470062] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:17:39.776 [2024-11-18 06:51:32.470069] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:17:39.776 [2024-11-18 06:51:32.470077] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:17:39.777 [2024-11-18 06:51:32.470084] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:17:39.777 [2024-11-18 06:51:32.470092] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:17:39.777 [2024-11-18 06:51:32.470105] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:17:39.777 
[2024-11-18 06:51:32.470113] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:17:39.777 [2024-11-18 06:51:32.470121] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:17:39.777 [2024-11-18 06:51:32.470128] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:17:39.777 [2024-11-18 06:51:32.470136] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:17:39.777 [2024-11-18 06:51:32.470144] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:17:39.777 [2024-11-18 06:51:32.470151] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:17:39.777 [2024-11-18 06:51:32.470159] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:17:39.777 [2024-11-18 06:51:32.470166] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:17:39.777 [2024-11-18 06:51:32.470174] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:17:39.777 [2024-11-18 06:51:32.470181] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:17:39.777 [2024-11-18 06:51:32.470189] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:17:39.777 [2024-11-18 06:51:32.470196] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:17:39.777 [2024-11-18 06:51:32.470204] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:17:39.777 [2024-11-18 06:51:32.470211] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:17:39.777 [2024-11-18 06:51:32.470219] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:17:39.777 [2024-11-18 06:51:32.470226] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:17:39.777 [2024-11-18 06:51:32.470234] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:17:39.777 [2024-11-18 06:51:32.470243] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:17:39.777 [2024-11-18 06:51:32.470251] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:17:39.777 [2024-11-18 06:51:32.470258] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:17:39.777 [2024-11-18 06:51:32.470266] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:17:39.777 [2024-11-18 06:51:32.470273] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:17:39.777 [2024-11-18 06:51:32.470281] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:17:39.777 [2024-11-18 06:51:32.470288] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:17:39.777 [2024-11-18 06:51:32.470295] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 
state: free 00:17:39.777 [2024-11-18 06:51:32.470303] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:17:39.777 [2024-11-18 06:51:32.470311] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:17:39.777 [2024-11-18 06:51:32.470319] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:17:39.777 [2024-11-18 06:51:32.470326] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:17:39.777 [2024-11-18 06:51:32.470334] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:17:39.777 [2024-11-18 06:51:32.470341] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:17:39.777 [2024-11-18 06:51:32.470352] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:17:39.777 [2024-11-18 06:51:32.470360] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:17:39.777 [2024-11-18 06:51:32.470367] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:17:39.777 [2024-11-18 06:51:32.470375] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:17:39.777 [2024-11-18 06:51:32.470382] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:17:39.777 [2024-11-18 06:51:32.470390] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:17:39.777 [2024-11-18 06:51:32.470397] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:17:39.777 [2024-11-18 06:51:32.470404] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:17:39.777 [2024-11-18 06:51:32.470411] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:17:39.777 [2024-11-18 06:51:32.470419] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:17:39.777 [2024-11-18 06:51:32.470426] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:17:39.777 [2024-11-18 06:51:32.470434] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:17:39.777 [2024-11-18 06:51:32.470442] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:17:39.777 [2024-11-18 06:51:32.470449] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:17:39.777 [2024-11-18 06:51:32.470456] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:17:39.777 [2024-11-18 06:51:32.470465] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:17:39.777 [2024-11-18 06:51:32.470472] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:17:39.777 [2024-11-18 06:51:32.470480] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:17:39.777 [2024-11-18 06:51:32.470488] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 
0 / 261120 wr_cnt: 0 state: free 00:17:39.777 [2024-11-18 06:51:32.470504] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:17:39.777 [2024-11-18 06:51:32.470512] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:17:39.777 [2024-11-18 06:51:32.470528] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:17:39.777 [2024-11-18 06:51:32.470542] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: ec0999a9-9a7d-450e-b3a6-a004ddc4ed37 00:17:39.777 [2024-11-18 06:51:32.470551] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:17:39.777 [2024-11-18 06:51:32.470559] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:17:39.777 [2024-11-18 06:51:32.470567] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:17:39.777 [2024-11-18 06:51:32.470575] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:17:39.777 [2024-11-18 06:51:32.470583] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:17:39.777 [2024-11-18 06:51:32.470591] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:17:39.777 [2024-11-18 06:51:32.470598] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:17:39.777 [2024-11-18 06:51:32.470605] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:17:39.777 [2024-11-18 06:51:32.470611] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:17:39.777 [2024-11-18 06:51:32.470623] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:39.777 [2024-11-18 06:51:32.470634] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:17:39.777 [2024-11-18 06:51:32.470644] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.931 ms 00:17:39.777 [2024-11-18 06:51:32.470651] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:39.777 [2024-11-18 06:51:32.472937] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:39.777 [2024-11-18 06:51:32.472989] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:17:39.777 [2024-11-18 06:51:32.473002] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.266 ms 00:17:39.777 [2024-11-18 06:51:32.473011] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:39.777 [2024-11-18 06:51:32.473153] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:39.777 [2024-11-18 06:51:32.473164] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:17:39.777 [2024-11-18 06:51:32.473174] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.101 ms 00:17:39.777 [2024-11-18 06:51:32.473182] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:39.777 [2024-11-18 06:51:32.481085] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:39.777 [2024-11-18 06:51:32.481126] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:39.777 [2024-11-18 06:51:32.481138] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:39.777 [2024-11-18 06:51:32.481147] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:39.777 [2024-11-18 06:51:32.481250] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:39.777 [2024-11-18 06:51:32.481260] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:39.777 [2024-11-18 06:51:32.481268] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:39.777 [2024-11-18 06:51:32.481277] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:39.777 [2024-11-18 06:51:32.481321] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:39.777 [2024-11-18 06:51:32.481331] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:39.777 [2024-11-18 06:51:32.481339] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:39.777 [2024-11-18 06:51:32.481347] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:39.777 [2024-11-18 06:51:32.481367] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:39.777 [2024-11-18 06:51:32.481376] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:39.777 [2024-11-18 06:51:32.481383] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:39.777 [2024-11-18 06:51:32.481391] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:39.777 [2024-11-18 06:51:32.496209] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:39.777 [2024-11-18 06:51:32.496255] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:39.777 [2024-11-18 06:51:32.496267] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:39.777 [2024-11-18 06:51:32.496277] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:39.778 [2024-11-18 06:51:32.507793] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:39.778 [2024-11-18 06:51:32.507840] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:39.778 [2024-11-18 06:51:32.507852] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:39.778 [2024-11-18 06:51:32.507862] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:39.778 [2024-11-18 06:51:32.507916] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:39.778 [2024-11-18 06:51:32.507926] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:39.778 [2024-11-18 06:51:32.507935] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:39.778 [2024-11-18 06:51:32.507944] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:39.778 [2024-11-18 06:51:32.508002] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:39.778 [2024-11-18 06:51:32.508015] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:39.778 [2024-11-18 06:51:32.508024] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:39.778 [2024-11-18 06:51:32.508036] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:39.778 [2024-11-18 06:51:32.508117] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:39.778 [2024-11-18 06:51:32.508127] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:39.778 [2024-11-18 06:51:32.508135] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:39.778 [2024-11-18 06:51:32.508143] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:39.778 [2024-11-18 06:51:32.508176] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Rollback 00:17:39.778 [2024-11-18 06:51:32.508185] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:17:39.778 [2024-11-18 06:51:32.508196] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:39.778 [2024-11-18 06:51:32.508208] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:39.778 [2024-11-18 06:51:32.508252] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:39.778 [2024-11-18 06:51:32.508271] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:39.778 [2024-11-18 06:51:32.508280] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:39.778 [2024-11-18 06:51:32.508288] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:39.778 [2024-11-18 06:51:32.508342] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:39.778 [2024-11-18 06:51:32.508353] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:39.778 [2024-11-18 06:51:32.508364] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:39.778 [2024-11-18 06:51:32.508373] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:39.778 [2024-11-18 06:51:32.508530] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 70.134 ms, result 0 00:17:39.778 00:17:39.778 00:17:39.778 06:51:32 ftl.ftl_trim -- ftl/trim.sh@106 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:17:40.431 /home/vagrant/spdk_repo/spdk/test/ftl/data: OK 00:17:40.431 06:51:33 ftl.ftl_trim -- ftl/trim.sh@108 -- # trap - SIGINT SIGTERM EXIT 00:17:40.431 06:51:33 ftl.ftl_trim -- ftl/trim.sh@109 -- # fio_kill 00:17:40.431 06:51:33 ftl.ftl_trim -- ftl/trim.sh@15 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:17:40.431 06:51:33 ftl.ftl_trim -- ftl/trim.sh@16 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:17:40.431 06:51:33 ftl.ftl_trim -- ftl/trim.sh@17 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/random_pattern 00:17:40.431 06:51:33 ftl.ftl_trim -- ftl/trim.sh@18 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/data 00:17:40.431 06:51:33 ftl.ftl_trim -- ftl/trim.sh@20 -- # killprocess 85495 00:17:40.431 06:51:33 ftl.ftl_trim -- common/autotest_common.sh@954 -- # '[' -z 85495 ']' 00:17:40.431 06:51:33 ftl.ftl_trim -- common/autotest_common.sh@958 -- # kill -0 85495 00:17:40.431 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (85495) - No such process 00:17:40.431 Process with pid 85495 is not found 00:17:40.431 06:51:33 ftl.ftl_trim -- common/autotest_common.sh@981 -- # echo 'Process with pid 85495 is not found' 00:17:40.431 00:17:40.431 real 1m3.231s 00:17:40.431 user 1m25.951s 00:17:40.431 sys 0m5.319s 00:17:40.431 06:51:33 ftl.ftl_trim -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:40.431 ************************************ 00:17:40.431 END TEST ftl_trim 00:17:40.431 ************************************ 00:17:40.431 06:51:33 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:17:40.431 06:51:33 ftl -- ftl/ftl.sh@76 -- # run_test ftl_restore /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -c 0000:00:10.0 0000:00:11.0 00:17:40.431 06:51:33 ftl -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:17:40.431 06:51:33 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:40.431 06:51:33 ftl -- common/autotest_common.sh@10 -- 
# set +x 00:17:40.692 ************************************ 00:17:40.692 START TEST ftl_restore 00:17:40.692 ************************************ 00:17:40.692 06:51:33 ftl.ftl_restore -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -c 0000:00:10.0 0000:00:11.0 00:17:40.692 * Looking for test storage... 00:17:40.692 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:17:40.692 06:51:33 ftl.ftl_restore -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:17:40.692 06:51:33 ftl.ftl_restore -- common/autotest_common.sh@1693 -- # lcov --version 00:17:40.692 06:51:33 ftl.ftl_restore -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:17:40.692 06:51:33 ftl.ftl_restore -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:17:40.692 06:51:33 ftl.ftl_restore -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:40.692 06:51:33 ftl.ftl_restore -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:40.692 06:51:33 ftl.ftl_restore -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:40.692 06:51:33 ftl.ftl_restore -- scripts/common.sh@336 -- # IFS=.-: 00:17:40.692 06:51:33 ftl.ftl_restore -- scripts/common.sh@336 -- # read -ra ver1 00:17:40.692 06:51:33 ftl.ftl_restore -- scripts/common.sh@337 -- # IFS=.-: 00:17:40.692 06:51:33 ftl.ftl_restore -- scripts/common.sh@337 -- # read -ra ver2 00:17:40.692 06:51:33 ftl.ftl_restore -- scripts/common.sh@338 -- # local 'op=<' 00:17:40.692 06:51:33 ftl.ftl_restore -- scripts/common.sh@340 -- # ver1_l=2 00:17:40.692 06:51:33 ftl.ftl_restore -- scripts/common.sh@341 -- # ver2_l=1 00:17:40.692 06:51:33 ftl.ftl_restore -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:40.692 06:51:33 ftl.ftl_restore -- scripts/common.sh@344 -- # case "$op" in 00:17:40.692 06:51:33 ftl.ftl_restore -- scripts/common.sh@345 -- # : 1 00:17:40.692 06:51:33 ftl.ftl_restore -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:40.692 06:51:33 ftl.ftl_restore -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:40.692 06:51:33 ftl.ftl_restore -- scripts/common.sh@365 -- # decimal 1 00:17:40.692 06:51:33 ftl.ftl_restore -- scripts/common.sh@353 -- # local d=1 00:17:40.692 06:51:33 ftl.ftl_restore -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:40.692 06:51:33 ftl.ftl_restore -- scripts/common.sh@355 -- # echo 1 00:17:40.692 06:51:33 ftl.ftl_restore -- scripts/common.sh@365 -- # ver1[v]=1 00:17:40.692 06:51:33 ftl.ftl_restore -- scripts/common.sh@366 -- # decimal 2 00:17:40.692 06:51:33 ftl.ftl_restore -- scripts/common.sh@353 -- # local d=2 00:17:40.693 06:51:33 ftl.ftl_restore -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:40.693 06:51:33 ftl.ftl_restore -- scripts/common.sh@355 -- # echo 2 00:17:40.693 06:51:33 ftl.ftl_restore -- scripts/common.sh@366 -- # ver2[v]=2 00:17:40.693 06:51:33 ftl.ftl_restore -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:40.693 06:51:33 ftl.ftl_restore -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:40.693 06:51:33 ftl.ftl_restore -- scripts/common.sh@368 -- # return 0 00:17:40.693 06:51:33 ftl.ftl_restore -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:40.693 06:51:33 ftl.ftl_restore -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:17:40.693 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:40.693 --rc genhtml_branch_coverage=1 00:17:40.693 --rc genhtml_function_coverage=1 00:17:40.693 --rc genhtml_legend=1 00:17:40.693 --rc geninfo_all_blocks=1 00:17:40.693 --rc geninfo_unexecuted_blocks=1 00:17:40.693 00:17:40.693 ' 00:17:40.693 06:51:33 ftl.ftl_restore -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:17:40.693 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:40.693 --rc genhtml_branch_coverage=1 00:17:40.693 --rc genhtml_function_coverage=1 00:17:40.693 --rc genhtml_legend=1 00:17:40.693 --rc geninfo_all_blocks=1 00:17:40.693 --rc geninfo_unexecuted_blocks=1 00:17:40.693 00:17:40.693 ' 00:17:40.693 06:51:33 ftl.ftl_restore -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:17:40.693 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:40.693 --rc genhtml_branch_coverage=1 00:17:40.693 --rc genhtml_function_coverage=1 00:17:40.693 --rc genhtml_legend=1 00:17:40.693 --rc geninfo_all_blocks=1 00:17:40.693 --rc geninfo_unexecuted_blocks=1 00:17:40.693 00:17:40.693 ' 00:17:40.693 06:51:33 ftl.ftl_restore -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:17:40.693 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:40.693 --rc genhtml_branch_coverage=1 00:17:40.693 --rc genhtml_function_coverage=1 00:17:40.693 --rc genhtml_legend=1 00:17:40.693 --rc geninfo_all_blocks=1 00:17:40.693 --rc geninfo_unexecuted_blocks=1 00:17:40.693 00:17:40.693 ' 00:17:40.693 06:51:33 ftl.ftl_restore -- ftl/restore.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:17:40.693 06:51:33 ftl.ftl_restore -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh 00:17:40.693 06:51:33 ftl.ftl_restore -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:17:40.693 06:51:33 ftl.ftl_restore -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:17:40.693 06:51:33 ftl.ftl_restore -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
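The xtrace above walks the dotted-version comparison in scripts/common.sh that picks the lcov option spelling: 'lt 1.15 2' splits both strings on the characters in IFS ('.-:'), compares them component by component, and returns 0, so the newer '--rc lcov_*' form is exported. A minimal self-contained sketch of that comparison logic follows; it is a simplification under assumptions (the function name is illustrative, and the real helper additionally validates that each component is numeric):

  # Succeeds (exit 0) when $1 sorts before $2 as a dotted version string.
  version_lt() {
      local -a a b
      local v x y
      IFS='.-:' read -ra a <<< "$1"
      IFS='.-:' read -ra b <<< "$2"
      for ((v = 0; v < ${#a[@]} || v < ${#b[@]}; v++)); do
          x=${a[v]:-0} y=${b[v]:-0}   # missing components compare as 0
          ((x < y)) && return 0
          ((x > y)) && return 1
      done
      return 1                        # equal versions are not less-than
  }
  version_lt 1.15 2 && echo "1.15 < 2"   # mirrors the 'lt 1.15 2' call traced above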
00:17:40.693 06:51:33 ftl.ftl_restore -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:17:40.693 06:51:33 ftl.ftl_restore -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:40.693 06:51:33 ftl.ftl_restore -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:17:40.693 06:51:33 ftl.ftl_restore -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:17:40.693 06:51:33 ftl.ftl_restore -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:40.693 06:51:33 ftl.ftl_restore -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:40.693 06:51:33 ftl.ftl_restore -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:17:40.693 06:51:33 ftl.ftl_restore -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:17:40.693 06:51:33 ftl.ftl_restore -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:17:40.693 06:51:33 ftl.ftl_restore -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:17:40.693 06:51:33 ftl.ftl_restore -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:17:40.693 06:51:33 ftl.ftl_restore -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:17:40.693 06:51:33 ftl.ftl_restore -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:40.693 06:51:33 ftl.ftl_restore -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:40.693 06:51:33 ftl.ftl_restore -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:17:40.693 06:51:33 ftl.ftl_restore -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:17:40.693 06:51:33 ftl.ftl_restore -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:17:40.693 06:51:33 ftl.ftl_restore -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:17:40.693 06:51:33 ftl.ftl_restore -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:17:40.693 06:51:33 ftl.ftl_restore -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:17:40.693 06:51:33 ftl.ftl_restore -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:17:40.693 06:51:33 ftl.ftl_restore -- ftl/common.sh@23 -- # spdk_ini_pid= 00:17:40.693 06:51:33 ftl.ftl_restore -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:17:40.693 06:51:33 ftl.ftl_restore -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:17:40.693 06:51:33 ftl.ftl_restore -- ftl/restore.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:40.693 06:51:33 ftl.ftl_restore -- ftl/restore.sh@13 -- # mktemp -d 00:17:40.693 06:51:33 ftl.ftl_restore -- ftl/restore.sh@13 -- # mount_dir=/tmp/tmp.A4Pter6F5Y 00:17:40.693 06:51:33 ftl.ftl_restore -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:17:40.693 06:51:33 ftl.ftl_restore -- ftl/restore.sh@16 -- # case $opt in 00:17:40.693 06:51:33 ftl.ftl_restore -- ftl/restore.sh@18 -- # nv_cache=0000:00:10.0 00:17:40.693 06:51:33 ftl.ftl_restore -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:17:40.693 06:51:33 ftl.ftl_restore -- ftl/restore.sh@23 -- # shift 2 00:17:40.693 06:51:33 ftl.ftl_restore -- ftl/restore.sh@24 -- # device=0000:00:11.0 00:17:40.693 06:51:33 ftl.ftl_restore -- ftl/restore.sh@25 -- # timeout=240 00:17:40.693 06:51:33 ftl.ftl_restore -- ftl/restore.sh@36 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:17:40.693 
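The restore.sh prologue traced here is the usual getopts-then-shift pattern: -c selects the NV cache device (0000:00:10.0), the remaining positional argument becomes the base device (0000:00:11.0), and a cleanup trap is installed before the target is launched. A rough sketch of its shape, where only nv_cache, device, timeout, the trap, and the ":u:c:f" option string are taken from the trace and everything else is an assumption (including the hypothetical flag variable):

  while getopts ":u:c:f" opt; do
      case $opt in
          c) nv_cache=$OPTARG ;;   # PCIe address of the NV cache bdev
          u) uuid=$OPTARG ;;       # assumed: restore an existing instance by UUID
          f) some_flag=1 ;;        # assumed: boolean option, no argument
      esac
  done
  shift $((OPTIND - 1))            # the trace shows this expanding to 'shift 2'
  device=$1
  timeout=240
  trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT

Incidentally, the "[: : integer expression expected" message from restore.sh line 54, visible further down, is the classic failure mode of this pattern: a flag variable that was never set gets tested with '[' '' -eq 1 ']'. Writing the test as [ "${some_flag:-0}" -eq 1 ] (or [[ $some_flag == 1 ]]) keeps it well-formed when the option is absent; here the run simply treats the failed test as false and continues.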
06:51:33 ftl.ftl_restore -- ftl/restore.sh@39 -- # svcpid=85786 00:17:40.693 06:51:33 ftl.ftl_restore -- ftl/restore.sh@41 -- # waitforlisten 85786 00:17:40.693 06:51:33 ftl.ftl_restore -- common/autotest_common.sh@835 -- # '[' -z 85786 ']' 00:17:40.693 06:51:33 ftl.ftl_restore -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:40.693 06:51:33 ftl.ftl_restore -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:40.693 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:40.693 06:51:33 ftl.ftl_restore -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:40.693 06:51:33 ftl.ftl_restore -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:40.693 06:51:33 ftl.ftl_restore -- common/autotest_common.sh@10 -- # set +x 00:17:40.693 06:51:33 ftl.ftl_restore -- ftl/restore.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:40.954 [2024-11-18 06:51:33.814019] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 00:17:40.954 [2024-11-18 06:51:33.814246] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85786 ] 00:17:40.954 [2024-11-18 06:51:33.984427] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:40.954 [2024-11-18 06:51:34.013133] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:41.898 06:51:34 ftl.ftl_restore -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:41.898 06:51:34 ftl.ftl_restore -- common/autotest_common.sh@868 -- # return 0 00:17:41.898 06:51:34 ftl.ftl_restore -- ftl/restore.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:17:41.898 06:51:34 ftl.ftl_restore -- ftl/common.sh@54 -- # local name=nvme0 00:17:41.898 06:51:34 ftl.ftl_restore -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:17:41.898 06:51:34 ftl.ftl_restore -- ftl/common.sh@56 -- # local size=103424 00:17:41.898 06:51:34 ftl.ftl_restore -- ftl/common.sh@59 -- # local base_bdev 00:17:41.898 06:51:34 ftl.ftl_restore -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:17:42.159 06:51:34 ftl.ftl_restore -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:17:42.159 06:51:34 ftl.ftl_restore -- ftl/common.sh@62 -- # local base_size 00:17:42.159 06:51:34 ftl.ftl_restore -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:17:42.159 06:51:34 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:17:42.159 06:51:34 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local bdev_info 00:17:42.159 06:51:34 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # local bs 00:17:42.159 06:51:34 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # local nb 00:17:42.159 06:51:34 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:17:42.159 06:51:35 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:17:42.159 { 00:17:42.159 "name": "nvme0n1", 00:17:42.159 "aliases": [ 00:17:42.159 "a63aa62c-6187-4252-ae31-d0106ba89368" 00:17:42.159 ], 00:17:42.159 "product_name": "NVMe disk", 00:17:42.159 "block_size": 4096, 00:17:42.159 "num_blocks": 1310720, 00:17:42.159 "uuid": 
"a63aa62c-6187-4252-ae31-d0106ba89368", 00:17:42.159 "numa_id": -1, 00:17:42.159 "assigned_rate_limits": { 00:17:42.159 "rw_ios_per_sec": 0, 00:17:42.159 "rw_mbytes_per_sec": 0, 00:17:42.159 "r_mbytes_per_sec": 0, 00:17:42.159 "w_mbytes_per_sec": 0 00:17:42.159 }, 00:17:42.159 "claimed": true, 00:17:42.159 "claim_type": "read_many_write_one", 00:17:42.159 "zoned": false, 00:17:42.159 "supported_io_types": { 00:17:42.159 "read": true, 00:17:42.159 "write": true, 00:17:42.159 "unmap": true, 00:17:42.159 "flush": true, 00:17:42.159 "reset": true, 00:17:42.159 "nvme_admin": true, 00:17:42.159 "nvme_io": true, 00:17:42.159 "nvme_io_md": false, 00:17:42.159 "write_zeroes": true, 00:17:42.159 "zcopy": false, 00:17:42.159 "get_zone_info": false, 00:17:42.159 "zone_management": false, 00:17:42.159 "zone_append": false, 00:17:42.159 "compare": true, 00:17:42.159 "compare_and_write": false, 00:17:42.159 "abort": true, 00:17:42.159 "seek_hole": false, 00:17:42.159 "seek_data": false, 00:17:42.159 "copy": true, 00:17:42.159 "nvme_iov_md": false 00:17:42.159 }, 00:17:42.159 "driver_specific": { 00:17:42.159 "nvme": [ 00:17:42.159 { 00:17:42.159 "pci_address": "0000:00:11.0", 00:17:42.159 "trid": { 00:17:42.159 "trtype": "PCIe", 00:17:42.159 "traddr": "0000:00:11.0" 00:17:42.159 }, 00:17:42.159 "ctrlr_data": { 00:17:42.159 "cntlid": 0, 00:17:42.159 "vendor_id": "0x1b36", 00:17:42.159 "model_number": "QEMU NVMe Ctrl", 00:17:42.159 "serial_number": "12341", 00:17:42.159 "firmware_revision": "8.0.0", 00:17:42.159 "subnqn": "nqn.2019-08.org.qemu:12341", 00:17:42.159 "oacs": { 00:17:42.159 "security": 0, 00:17:42.159 "format": 1, 00:17:42.159 "firmware": 0, 00:17:42.159 "ns_manage": 1 00:17:42.159 }, 00:17:42.159 "multi_ctrlr": false, 00:17:42.159 "ana_reporting": false 00:17:42.159 }, 00:17:42.159 "vs": { 00:17:42.159 "nvme_version": "1.4" 00:17:42.159 }, 00:17:42.159 "ns_data": { 00:17:42.159 "id": 1, 00:17:42.159 "can_share": false 00:17:42.159 } 00:17:42.159 } 00:17:42.159 ], 00:17:42.159 "mp_policy": "active_passive" 00:17:42.159 } 00:17:42.159 } 00:17:42.159 ]' 00:17:42.159 06:51:35 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:17:42.420 06:51:35 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bs=4096 00:17:42.420 06:51:35 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:17:42.420 06:51:35 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # nb=1310720 00:17:42.420 06:51:35 ftl.ftl_restore -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:17:42.420 06:51:35 ftl.ftl_restore -- common/autotest_common.sh@1392 -- # echo 5120 00:17:42.420 06:51:35 ftl.ftl_restore -- ftl/common.sh@63 -- # base_size=5120 00:17:42.420 06:51:35 ftl.ftl_restore -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:17:42.420 06:51:35 ftl.ftl_restore -- ftl/common.sh@67 -- # clear_lvols 00:17:42.420 06:51:35 ftl.ftl_restore -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:17:42.420 06:51:35 ftl.ftl_restore -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:17:42.681 06:51:35 ftl.ftl_restore -- ftl/common.sh@28 -- # stores=f1992292-5c42-448e-b968-a4ecf9ae834a 00:17:42.681 06:51:35 ftl.ftl_restore -- ftl/common.sh@29 -- # for lvs in $stores 00:17:42.681 06:51:35 ftl.ftl_restore -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u f1992292-5c42-448e-b968-a4ecf9ae834a 00:17:42.681 06:51:35 ftl.ftl_restore -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
bdev_lvol_create_lvstore nvme0n1 lvs 00:17:42.943 06:51:35 ftl.ftl_restore -- ftl/common.sh@68 -- # lvs=aadf3dae-766f-40c8-978b-74407abed8dd 00:17:42.943 06:51:35 ftl.ftl_restore -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u aadf3dae-766f-40c8-978b-74407abed8dd 00:17:43.205 06:51:36 ftl.ftl_restore -- ftl/restore.sh@43 -- # split_bdev=d2673f3d-4dc2-44e7-b692-10db9d945dc2 00:17:43.205 06:51:36 ftl.ftl_restore -- ftl/restore.sh@44 -- # '[' -n 0000:00:10.0 ']' 00:17:43.205 06:51:36 ftl.ftl_restore -- ftl/restore.sh@45 -- # create_nv_cache_bdev nvc0 0000:00:10.0 d2673f3d-4dc2-44e7-b692-10db9d945dc2 00:17:43.205 06:51:36 ftl.ftl_restore -- ftl/common.sh@35 -- # local name=nvc0 00:17:43.205 06:51:36 ftl.ftl_restore -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:17:43.205 06:51:36 ftl.ftl_restore -- ftl/common.sh@37 -- # local base_bdev=d2673f3d-4dc2-44e7-b692-10db9d945dc2 00:17:43.205 06:51:36 ftl.ftl_restore -- ftl/common.sh@38 -- # local cache_size= 00:17:43.205 06:51:36 ftl.ftl_restore -- ftl/common.sh@41 -- # get_bdev_size d2673f3d-4dc2-44e7-b692-10db9d945dc2 00:17:43.205 06:51:36 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bdev_name=d2673f3d-4dc2-44e7-b692-10db9d945dc2 00:17:43.205 06:51:36 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local bdev_info 00:17:43.205 06:51:36 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # local bs 00:17:43.205 06:51:36 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # local nb 00:17:43.205 06:51:36 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b d2673f3d-4dc2-44e7-b692-10db9d945dc2 00:17:43.467 06:51:36 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:17:43.467 { 00:17:43.467 "name": "d2673f3d-4dc2-44e7-b692-10db9d945dc2", 00:17:43.467 "aliases": [ 00:17:43.467 "lvs/nvme0n1p0" 00:17:43.467 ], 00:17:43.467 "product_name": "Logical Volume", 00:17:43.467 "block_size": 4096, 00:17:43.467 "num_blocks": 26476544, 00:17:43.467 "uuid": "d2673f3d-4dc2-44e7-b692-10db9d945dc2", 00:17:43.467 "assigned_rate_limits": { 00:17:43.467 "rw_ios_per_sec": 0, 00:17:43.467 "rw_mbytes_per_sec": 0, 00:17:43.467 "r_mbytes_per_sec": 0, 00:17:43.467 "w_mbytes_per_sec": 0 00:17:43.467 }, 00:17:43.467 "claimed": false, 00:17:43.467 "zoned": false, 00:17:43.467 "supported_io_types": { 00:17:43.467 "read": true, 00:17:43.467 "write": true, 00:17:43.467 "unmap": true, 00:17:43.467 "flush": false, 00:17:43.467 "reset": true, 00:17:43.467 "nvme_admin": false, 00:17:43.467 "nvme_io": false, 00:17:43.467 "nvme_io_md": false, 00:17:43.467 "write_zeroes": true, 00:17:43.467 "zcopy": false, 00:17:43.467 "get_zone_info": false, 00:17:43.467 "zone_management": false, 00:17:43.467 "zone_append": false, 00:17:43.467 "compare": false, 00:17:43.467 "compare_and_write": false, 00:17:43.467 "abort": false, 00:17:43.467 "seek_hole": true, 00:17:43.467 "seek_data": true, 00:17:43.467 "copy": false, 00:17:43.467 "nvme_iov_md": false 00:17:43.467 }, 00:17:43.467 "driver_specific": { 00:17:43.467 "lvol": { 00:17:43.467 "lvol_store_uuid": "aadf3dae-766f-40c8-978b-74407abed8dd", 00:17:43.467 "base_bdev": "nvme0n1", 00:17:43.467 "thin_provision": true, 00:17:43.467 "num_allocated_clusters": 0, 00:17:43.467 "snapshot": false, 00:17:43.467 "clone": false, 00:17:43.467 "esnap_clone": false 00:17:43.467 } 00:17:43.467 } 00:17:43.467 } 00:17:43.467 ]' 00:17:43.467 06:51:36 ftl.ftl_restore -- 
common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:17:43.467 06:51:36 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bs=4096 00:17:43.467 06:51:36 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:17:43.467 06:51:36 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # nb=26476544 00:17:43.467 06:51:36 ftl.ftl_restore -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:17:43.467 06:51:36 ftl.ftl_restore -- common/autotest_common.sh@1392 -- # echo 103424 00:17:43.467 06:51:36 ftl.ftl_restore -- ftl/common.sh@41 -- # local base_size=5171 00:17:43.467 06:51:36 ftl.ftl_restore -- ftl/common.sh@44 -- # local nvc_bdev 00:17:43.467 06:51:36 ftl.ftl_restore -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:17:43.729 06:51:36 ftl.ftl_restore -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:17:43.729 06:51:36 ftl.ftl_restore -- ftl/common.sh@47 -- # [[ -z '' ]] 00:17:43.729 06:51:36 ftl.ftl_restore -- ftl/common.sh@48 -- # get_bdev_size d2673f3d-4dc2-44e7-b692-10db9d945dc2 00:17:43.729 06:51:36 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bdev_name=d2673f3d-4dc2-44e7-b692-10db9d945dc2 00:17:43.729 06:51:36 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local bdev_info 00:17:43.729 06:51:36 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # local bs 00:17:43.729 06:51:36 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # local nb 00:17:43.729 06:51:36 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b d2673f3d-4dc2-44e7-b692-10db9d945dc2 00:17:43.991 06:51:36 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:17:43.991 { 00:17:43.991 "name": "d2673f3d-4dc2-44e7-b692-10db9d945dc2", 00:17:43.991 "aliases": [ 00:17:43.991 "lvs/nvme0n1p0" 00:17:43.991 ], 00:17:43.991 "product_name": "Logical Volume", 00:17:43.991 "block_size": 4096, 00:17:43.991 "num_blocks": 26476544, 00:17:43.991 "uuid": "d2673f3d-4dc2-44e7-b692-10db9d945dc2", 00:17:43.991 "assigned_rate_limits": { 00:17:43.991 "rw_ios_per_sec": 0, 00:17:43.991 "rw_mbytes_per_sec": 0, 00:17:43.991 "r_mbytes_per_sec": 0, 00:17:43.991 "w_mbytes_per_sec": 0 00:17:43.991 }, 00:17:43.991 "claimed": false, 00:17:43.991 "zoned": false, 00:17:43.991 "supported_io_types": { 00:17:43.991 "read": true, 00:17:43.991 "write": true, 00:17:43.991 "unmap": true, 00:17:43.991 "flush": false, 00:17:43.991 "reset": true, 00:17:43.991 "nvme_admin": false, 00:17:43.991 "nvme_io": false, 00:17:43.991 "nvme_io_md": false, 00:17:43.991 "write_zeroes": true, 00:17:43.991 "zcopy": false, 00:17:43.991 "get_zone_info": false, 00:17:43.991 "zone_management": false, 00:17:43.991 "zone_append": false, 00:17:43.991 "compare": false, 00:17:43.991 "compare_and_write": false, 00:17:43.991 "abort": false, 00:17:43.991 "seek_hole": true, 00:17:43.991 "seek_data": true, 00:17:43.991 "copy": false, 00:17:43.991 "nvme_iov_md": false 00:17:43.991 }, 00:17:43.991 "driver_specific": { 00:17:43.991 "lvol": { 00:17:43.991 "lvol_store_uuid": "aadf3dae-766f-40c8-978b-74407abed8dd", 00:17:43.991 "base_bdev": "nvme0n1", 00:17:43.991 "thin_provision": true, 00:17:43.991 "num_allocated_clusters": 0, 00:17:43.991 "snapshot": false, 00:17:43.991 "clone": false, 00:17:43.991 "esnap_clone": false 00:17:43.991 } 00:17:43.991 } 00:17:43.991 } 00:17:43.991 ]' 00:17:43.991 06:51:36 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 
00:17:43.991 06:51:36 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bs=4096 00:17:43.991 06:51:36 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:17:43.991 06:51:37 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # nb=26476544 00:17:43.991 06:51:37 ftl.ftl_restore -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:17:43.991 06:51:37 ftl.ftl_restore -- common/autotest_common.sh@1392 -- # echo 103424 00:17:43.991 06:51:37 ftl.ftl_restore -- ftl/common.sh@48 -- # cache_size=5171 00:17:43.991 06:51:37 ftl.ftl_restore -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:17:44.252 06:51:37 ftl.ftl_restore -- ftl/restore.sh@45 -- # nvc_bdev=nvc0n1p0 00:17:44.252 06:51:37 ftl.ftl_restore -- ftl/restore.sh@48 -- # get_bdev_size d2673f3d-4dc2-44e7-b692-10db9d945dc2 00:17:44.252 06:51:37 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bdev_name=d2673f3d-4dc2-44e7-b692-10db9d945dc2 00:17:44.252 06:51:37 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local bdev_info 00:17:44.252 06:51:37 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # local bs 00:17:44.252 06:51:37 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # local nb 00:17:44.252 06:51:37 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b d2673f3d-4dc2-44e7-b692-10db9d945dc2 00:17:44.514 06:51:37 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:17:44.514 { 00:17:44.514 "name": "d2673f3d-4dc2-44e7-b692-10db9d945dc2", 00:17:44.514 "aliases": [ 00:17:44.514 "lvs/nvme0n1p0" 00:17:44.514 ], 00:17:44.514 "product_name": "Logical Volume", 00:17:44.514 "block_size": 4096, 00:17:44.514 "num_blocks": 26476544, 00:17:44.514 "uuid": "d2673f3d-4dc2-44e7-b692-10db9d945dc2", 00:17:44.514 "assigned_rate_limits": { 00:17:44.514 "rw_ios_per_sec": 0, 00:17:44.514 "rw_mbytes_per_sec": 0, 00:17:44.514 "r_mbytes_per_sec": 0, 00:17:44.514 "w_mbytes_per_sec": 0 00:17:44.514 }, 00:17:44.514 "claimed": false, 00:17:44.514 "zoned": false, 00:17:44.514 "supported_io_types": { 00:17:44.514 "read": true, 00:17:44.514 "write": true, 00:17:44.514 "unmap": true, 00:17:44.514 "flush": false, 00:17:44.514 "reset": true, 00:17:44.514 "nvme_admin": false, 00:17:44.514 "nvme_io": false, 00:17:44.514 "nvme_io_md": false, 00:17:44.514 "write_zeroes": true, 00:17:44.514 "zcopy": false, 00:17:44.514 "get_zone_info": false, 00:17:44.514 "zone_management": false, 00:17:44.514 "zone_append": false, 00:17:44.514 "compare": false, 00:17:44.514 "compare_and_write": false, 00:17:44.514 "abort": false, 00:17:44.514 "seek_hole": true, 00:17:44.514 "seek_data": true, 00:17:44.514 "copy": false, 00:17:44.514 "nvme_iov_md": false 00:17:44.514 }, 00:17:44.514 "driver_specific": { 00:17:44.514 "lvol": { 00:17:44.514 "lvol_store_uuid": "aadf3dae-766f-40c8-978b-74407abed8dd", 00:17:44.514 "base_bdev": "nvme0n1", 00:17:44.514 "thin_provision": true, 00:17:44.514 "num_allocated_clusters": 0, 00:17:44.514 "snapshot": false, 00:17:44.514 "clone": false, 00:17:44.514 "esnap_clone": false 00:17:44.514 } 00:17:44.514 } 00:17:44.514 } 00:17:44.514 ]' 00:17:44.514 06:51:37 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:17:44.514 06:51:37 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bs=4096 00:17:44.514 06:51:37 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:17:44.514 06:51:37 ftl.ftl_restore -- 
common/autotest_common.sh@1388 -- # nb=26476544 00:17:44.514 06:51:37 ftl.ftl_restore -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:17:44.514 06:51:37 ftl.ftl_restore -- common/autotest_common.sh@1392 -- # echo 103424 00:17:44.514 06:51:37 ftl.ftl_restore -- ftl/restore.sh@48 -- # l2p_dram_size_mb=10 00:17:44.514 06:51:37 ftl.ftl_restore -- ftl/restore.sh@49 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d d2673f3d-4dc2-44e7-b692-10db9d945dc2 --l2p_dram_limit 10' 00:17:44.514 06:51:37 ftl.ftl_restore -- ftl/restore.sh@51 -- # '[' -n '' ']' 00:17:44.514 06:51:37 ftl.ftl_restore -- ftl/restore.sh@52 -- # '[' -n 0000:00:10.0 ']' 00:17:44.514 06:51:37 ftl.ftl_restore -- ftl/restore.sh@52 -- # ftl_construct_args+=' -c nvc0n1p0' 00:17:44.514 06:51:37 ftl.ftl_restore -- ftl/restore.sh@54 -- # '[' '' -eq 1 ']' 00:17:44.514 /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh: line 54: [: : integer expression expected 00:17:44.514 06:51:37 ftl.ftl_restore -- ftl/restore.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d d2673f3d-4dc2-44e7-b692-10db9d945dc2 --l2p_dram_limit 10 -c nvc0n1p0 00:17:44.776 [2024-11-18 06:51:37.699684] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.776 [2024-11-18 06:51:37.699724] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:17:44.776 [2024-11-18 06:51:37.699739] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:17:44.776 [2024-11-18 06:51:37.699747] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.776 [2024-11-18 06:51:37.699790] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.776 [2024-11-18 06:51:37.699799] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:44.776 [2024-11-18 06:51:37.699807] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:17:44.776 [2024-11-18 06:51:37.699817] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.776 [2024-11-18 06:51:37.699835] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:17:44.776 [2024-11-18 06:51:37.700122] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:17:44.776 [2024-11-18 06:51:37.700136] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.776 [2024-11-18 06:51:37.700144] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:44.776 [2024-11-18 06:51:37.700151] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.308 ms 00:17:44.776 [2024-11-18 06:51:37.700158] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.776 [2024-11-18 06:51:37.700181] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 0cc73ce1-3fae-4d36-91d2-6119a87f6d65 00:17:44.776 [2024-11-18 06:51:37.701134] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.776 [2024-11-18 06:51:37.701155] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:17:44.777 [2024-11-18 06:51:37.701166] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.022 ms 00:17:44.777 [2024-11-18 06:51:37.701173] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.777 [2024-11-18 06:51:37.705740] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.777 [2024-11-18 
06:51:37.705767] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:44.777 [2024-11-18 06:51:37.705776] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.511 ms 00:17:44.777 [2024-11-18 06:51:37.705789] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.777 [2024-11-18 06:51:37.705845] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.777 [2024-11-18 06:51:37.705855] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:44.777 [2024-11-18 06:51:37.705862] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:17:44.777 [2024-11-18 06:51:37.705868] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.777 [2024-11-18 06:51:37.705905] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.777 [2024-11-18 06:51:37.705912] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:17:44.777 [2024-11-18 06:51:37.705920] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:17:44.777 [2024-11-18 06:51:37.705925] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.777 [2024-11-18 06:51:37.705943] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:17:44.777 [2024-11-18 06:51:37.707327] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.777 [2024-11-18 06:51:37.707358] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:44.777 [2024-11-18 06:51:37.707365] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.390 ms 00:17:44.777 [2024-11-18 06:51:37.707372] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.777 [2024-11-18 06:51:37.707399] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.777 [2024-11-18 06:51:37.707406] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:17:44.777 [2024-11-18 06:51:37.707413] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:17:44.777 [2024-11-18 06:51:37.707421] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.777 [2024-11-18 06:51:37.707434] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:17:44.777 [2024-11-18 06:51:37.707538] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:17:44.777 [2024-11-18 06:51:37.707547] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:17:44.777 [2024-11-18 06:51:37.707562] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:17:44.777 [2024-11-18 06:51:37.707570] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:17:44.777 [2024-11-18 06:51:37.707582] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:17:44.777 [2024-11-18 06:51:37.707588] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:17:44.777 [2024-11-18 06:51:37.707597] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:17:44.777 [2024-11-18 06:51:37.707603] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:17:44.777 [2024-11-18 06:51:37.707610] 
ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:17:44.777 [2024-11-18 06:51:37.707616] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.777 [2024-11-18 06:51:37.707623] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:17:44.777 [2024-11-18 06:51:37.707629] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.183 ms 00:17:44.777 [2024-11-18 06:51:37.707635] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.777 [2024-11-18 06:51:37.707698] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.777 [2024-11-18 06:51:37.707710] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:17:44.777 [2024-11-18 06:51:37.707715] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:17:44.777 [2024-11-18 06:51:37.707722] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.777 [2024-11-18 06:51:37.707797] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:17:44.777 [2024-11-18 06:51:37.707806] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:17:44.777 [2024-11-18 06:51:37.707812] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:44.777 [2024-11-18 06:51:37.707820] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:44.777 [2024-11-18 06:51:37.707826] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:17:44.777 [2024-11-18 06:51:37.707833] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:17:44.777 [2024-11-18 06:51:37.707838] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:17:44.777 [2024-11-18 06:51:37.707846] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:17:44.777 [2024-11-18 06:51:37.707851] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:17:44.777 [2024-11-18 06:51:37.707858] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:44.777 [2024-11-18 06:51:37.707863] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:17:44.777 [2024-11-18 06:51:37.707869] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:17:44.777 [2024-11-18 06:51:37.707874] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:44.777 [2024-11-18 06:51:37.707882] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:17:44.777 [2024-11-18 06:51:37.707887] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:17:44.777 [2024-11-18 06:51:37.707895] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:44.777 [2024-11-18 06:51:37.707900] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:17:44.777 [2024-11-18 06:51:37.707907] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:17:44.777 [2024-11-18 06:51:37.707912] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:44.777 [2024-11-18 06:51:37.707919] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:17:44.777 [2024-11-18 06:51:37.707924] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:17:44.777 [2024-11-18 06:51:37.707930] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:44.777 [2024-11-18 06:51:37.707935] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:17:44.777 
[2024-11-18 06:51:37.707941] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:17:44.777 [2024-11-18 06:51:37.707946] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:44.777 [2024-11-18 06:51:37.707952] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:17:44.777 [2024-11-18 06:51:37.707957] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:17:44.777 [2024-11-18 06:51:37.707963] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:44.777 [2024-11-18 06:51:37.707969] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:17:44.777 [2024-11-18 06:51:37.707985] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:17:44.777 [2024-11-18 06:51:37.707991] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:44.777 [2024-11-18 06:51:37.707997] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:17:44.777 [2024-11-18 06:51:37.708002] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:17:44.777 [2024-11-18 06:51:37.708010] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:44.777 [2024-11-18 06:51:37.708015] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:17:44.777 [2024-11-18 06:51:37.708021] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:17:44.777 [2024-11-18 06:51:37.708026] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:44.777 [2024-11-18 06:51:37.708032] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:17:44.777 [2024-11-18 06:51:37.708038] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:17:44.777 [2024-11-18 06:51:37.708044] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:44.777 [2024-11-18 06:51:37.708049] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:17:44.777 [2024-11-18 06:51:37.708055] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:17:44.777 [2024-11-18 06:51:37.708060] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:44.777 [2024-11-18 06:51:37.708067] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:17:44.777 [2024-11-18 06:51:37.708073] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:17:44.777 [2024-11-18 06:51:37.708081] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:44.777 [2024-11-18 06:51:37.708086] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:44.777 [2024-11-18 06:51:37.708093] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:17:44.777 [2024-11-18 06:51:37.708098] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:17:44.777 [2024-11-18 06:51:37.708105] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:17:44.777 [2024-11-18 06:51:37.708110] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:17:44.777 [2024-11-18 06:51:37.708116] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:17:44.778 [2024-11-18 06:51:37.708122] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:17:44.778 [2024-11-18 06:51:37.708131] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:17:44.778 [2024-11-18 
06:51:37.708139] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:44.778 [2024-11-18 06:51:37.708147] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:17:44.778 [2024-11-18 06:51:37.708152] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:17:44.778 [2024-11-18 06:51:37.708160] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:17:44.778 [2024-11-18 06:51:37.708165] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:17:44.778 [2024-11-18 06:51:37.708172] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:17:44.778 [2024-11-18 06:51:37.708177] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:17:44.778 [2024-11-18 06:51:37.708186] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:17:44.778 [2024-11-18 06:51:37.708191] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:17:44.778 [2024-11-18 06:51:37.708198] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:17:44.778 [2024-11-18 06:51:37.708203] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:17:44.778 [2024-11-18 06:51:37.708210] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:17:44.778 [2024-11-18 06:51:37.708215] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:17:44.778 [2024-11-18 06:51:37.708222] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:17:44.778 [2024-11-18 06:51:37.708228] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:17:44.778 [2024-11-18 06:51:37.708235] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:17:44.778 [2024-11-18 06:51:37.708241] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:44.778 [2024-11-18 06:51:37.708249] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:17:44.778 [2024-11-18 06:51:37.708254] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:17:44.778 [2024-11-18 06:51:37.708261] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:17:44.778 [2024-11-18 06:51:37.708267] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: 
[FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:17:44.778 [2024-11-18 06:51:37.708274] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.778 [2024-11-18 06:51:37.708279] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:17:44.778 [2024-11-18 06:51:37.708287] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.526 ms 00:17:44.778 [2024-11-18 06:51:37.708293] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.778 [2024-11-18 06:51:37.708327] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 00:17:44.778 [2024-11-18 06:51:37.708334] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:17:48.992 [2024-11-18 06:51:41.479706] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:48.992 [2024-11-18 06:51:41.479804] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:17:48.992 [2024-11-18 06:51:41.479824] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3771.357 ms 00:17:48.992 [2024-11-18 06:51:41.479834] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:48.992 [2024-11-18 06:51:41.494136] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:48.992 [2024-11-18 06:51:41.494191] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:48.992 [2024-11-18 06:51:41.494208] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.175 ms 00:17:48.992 [2024-11-18 06:51:41.494217] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:48.992 [2024-11-18 06:51:41.494345] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:48.992 [2024-11-18 06:51:41.494363] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:17:48.993 [2024-11-18 06:51:41.494375] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.072 ms 00:17:48.993 [2024-11-18 06:51:41.494383] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:48.993 [2024-11-18 06:51:41.506752] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:48.993 [2024-11-18 06:51:41.506808] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:48.993 [2024-11-18 06:51:41.506849] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.322 ms 00:17:48.993 [2024-11-18 06:51:41.506863] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:48.993 [2024-11-18 06:51:41.506902] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:48.993 [2024-11-18 06:51:41.506910] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:48.993 [2024-11-18 06:51:41.506922] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:17:48.993 [2024-11-18 06:51:41.506930] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:48.993 [2024-11-18 06:51:41.507522] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:48.993 [2024-11-18 06:51:41.507570] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:48.993 [2024-11-18 06:51:41.507590] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.506 ms 00:17:48.993 [2024-11-18 06:51:41.507599] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:48.993 
[2024-11-18 06:51:41.507725] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:48.993 [2024-11-18 06:51:41.507740] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:48.993 [2024-11-18 06:51:41.507752] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.096 ms 00:17:48.993 [2024-11-18 06:51:41.507761] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:48.993 [2024-11-18 06:51:41.516448] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:48.993 [2024-11-18 06:51:41.516499] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:48.993 [2024-11-18 06:51:41.516512] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.662 ms 00:17:48.993 [2024-11-18 06:51:41.516519] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:48.993 [2024-11-18 06:51:41.526218] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:17:48.993 [2024-11-18 06:51:41.529916] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:48.993 [2024-11-18 06:51:41.529968] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:17:48.993 [2024-11-18 06:51:41.529991] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.328 ms 00:17:48.993 [2024-11-18 06:51:41.530003] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:48.993 [2024-11-18 06:51:41.624199] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:48.993 [2024-11-18 06:51:41.624279] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:17:48.993 [2024-11-18 06:51:41.624300] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 94.163 ms 00:17:48.993 [2024-11-18 06:51:41.624315] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:48.993 [2024-11-18 06:51:41.624528] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:48.993 [2024-11-18 06:51:41.624544] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:17:48.993 [2024-11-18 06:51:41.624560] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.156 ms 00:17:48.993 [2024-11-18 06:51:41.624571] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:48.993 [2024-11-18 06:51:41.631301] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:48.993 [2024-11-18 06:51:41.631365] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:17:48.993 [2024-11-18 06:51:41.631378] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.707 ms 00:17:48.993 [2024-11-18 06:51:41.631398] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:48.993 [2024-11-18 06:51:41.637174] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:48.993 [2024-11-18 06:51:41.637238] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:17:48.993 [2024-11-18 06:51:41.637250] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.722 ms 00:17:48.993 [2024-11-18 06:51:41.637260] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:48.993 [2024-11-18 06:51:41.637613] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:48.993 [2024-11-18 06:51:41.637646] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:17:48.993 
[2024-11-18 06:51:41.637657] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.303 ms 00:17:48.993 [2024-11-18 06:51:41.637670] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:48.993 [2024-11-18 06:51:41.682647] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:48.993 [2024-11-18 06:51:41.682717] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:17:48.993 [2024-11-18 06:51:41.682729] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 44.953 ms 00:17:48.993 [2024-11-18 06:51:41.682747] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:48.993 [2024-11-18 06:51:41.689935] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:48.993 [2024-11-18 06:51:41.690019] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:17:48.993 [2024-11-18 06:51:41.690037] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.099 ms 00:17:48.993 [2024-11-18 06:51:41.690048] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:48.993 [2024-11-18 06:51:41.696389] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:48.993 [2024-11-18 06:51:41.696439] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:17:48.993 [2024-11-18 06:51:41.696450] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.293 ms 00:17:48.993 [2024-11-18 06:51:41.696460] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:48.993 [2024-11-18 06:51:41.702668] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:48.993 [2024-11-18 06:51:41.702726] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:17:48.993 [2024-11-18 06:51:41.702736] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.159 ms 00:17:48.993 [2024-11-18 06:51:41.702749] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:48.993 [2024-11-18 06:51:41.702802] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:48.993 [2024-11-18 06:51:41.702826] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:17:48.993 [2024-11-18 06:51:41.702836] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:17:48.993 [2024-11-18 06:51:41.702855] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:48.993 [2024-11-18 06:51:41.702947] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:48.993 [2024-11-18 06:51:41.702960] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:17:48.993 [2024-11-18 06:51:41.702969] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:17:48.993 [2024-11-18 06:51:41.702998] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:48.993 [2024-11-18 06:51:41.704215] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 4004.051 ms, result 0 00:17:48.993 { 00:17:48.993 "name": "ftl0", 00:17:48.993 "uuid": "0cc73ce1-3fae-4d36-91d2-6119a87f6d65" 00:17:48.993 } 00:17:48.993 06:51:41 ftl.ftl_restore -- ftl/restore.sh@61 -- # echo '{"subsystems": [' 00:17:48.993 06:51:41 ftl.ftl_restore -- ftl/restore.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:17:48.993 06:51:41 ftl.ftl_restore -- ftl/restore.sh@63 -- # echo ']}' 00:17:48.993 06:51:41 ftl.ftl_restore -- 
ftl/restore.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:17:49.256 [2024-11-18 06:51:42.139492] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:49.256 [2024-11-18 06:51:42.139550] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:17:49.256 [2024-11-18 06:51:42.139565] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:17:49.256 [2024-11-18 06:51:42.139575] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:49.256 [2024-11-18 06:51:42.139603] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:17:49.256 [2024-11-18 06:51:42.140376] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:49.257 [2024-11-18 06:51:42.140424] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:17:49.257 [2024-11-18 06:51:42.140436] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.757 ms 00:17:49.257 [2024-11-18 06:51:42.140449] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:49.257 [2024-11-18 06:51:42.140714] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:49.257 [2024-11-18 06:51:42.140731] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:17:49.257 [2024-11-18 06:51:42.140741] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.239 ms 00:17:49.257 [2024-11-18 06:51:42.140757] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:49.257 [2024-11-18 06:51:42.144040] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:49.257 [2024-11-18 06:51:42.144071] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:17:49.257 [2024-11-18 06:51:42.144080] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.261 ms 00:17:49.257 [2024-11-18 06:51:42.144090] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:49.257 [2024-11-18 06:51:42.150218] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:49.257 [2024-11-18 06:51:42.150262] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:17:49.257 [2024-11-18 06:51:42.150274] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.109 ms 00:17:49.257 [2024-11-18 06:51:42.150285] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:49.257 [2024-11-18 06:51:42.153170] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:49.257 [2024-11-18 06:51:42.153232] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:17:49.257 [2024-11-18 06:51:42.153242] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.775 ms 00:17:49.257 [2024-11-18 06:51:42.153252] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:49.257 [2024-11-18 06:51:42.159766] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:49.257 [2024-11-18 06:51:42.159836] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:17:49.257 [2024-11-18 06:51:42.159848] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.464 ms 00:17:49.257 [2024-11-18 06:51:42.159859] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:49.257 [2024-11-18 06:51:42.160023] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:49.257 [2024-11-18 06:51:42.160042] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:17:49.257 [2024-11-18 06:51:42.160055] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.113 ms 00:17:49.257 [2024-11-18 06:51:42.160065] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:49.257 [2024-11-18 06:51:42.163612] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:49.257 [2024-11-18 06:51:42.163675] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:17:49.257 [2024-11-18 06:51:42.163686] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.527 ms 00:17:49.257 [2024-11-18 06:51:42.163696] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:49.257 [2024-11-18 06:51:42.166457] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:49.257 [2024-11-18 06:51:42.166517] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:17:49.257 [2024-11-18 06:51:42.166527] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.711 ms 00:17:49.257 [2024-11-18 06:51:42.166537] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:49.257 [2024-11-18 06:51:42.168768] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:49.257 [2024-11-18 06:51:42.168827] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:17:49.257 [2024-11-18 06:51:42.168837] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.182 ms 00:17:49.257 [2024-11-18 06:51:42.168847] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:49.257 [2024-11-18 06:51:42.171148] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:49.257 [2024-11-18 06:51:42.171210] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:17:49.257 [2024-11-18 06:51:42.171220] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.227 ms 00:17:49.257 [2024-11-18 06:51:42.171230] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:49.257 [2024-11-18 06:51:42.171277] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:17:49.257 [2024-11-18 06:51:42.171295] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:17:49.257 [2024-11-18 06:51:42.171306] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:17:49.257 [2024-11-18 06:51:42.171317] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:17:49.257 [2024-11-18 06:51:42.171325] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:17:49.257 [2024-11-18 06:51:42.171342] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:17:49.257 [2024-11-18 06:51:42.171350] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:17:49.257 [2024-11-18 06:51:42.171360] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:17:49.257 [2024-11-18 06:51:42.171368] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:17:49.257 [2024-11-18 06:51:42.171378] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:17:49.257 [2024-11-18 06:51:42.171386] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:17:49.257 [2024-11-18 06:51:42.171395] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:17:49.257 [2024-11-18 06:51:42.171403] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:17:49.257 [2024-11-18 06:51:42.171413] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:17:49.257 [2024-11-18 06:51:42.171421] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:17:49.257 [2024-11-18 06:51:42.171430] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:17:49.257 [2024-11-18 06:51:42.171438] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:17:49.257 [2024-11-18 06:51:42.171449] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:17:49.257 [2024-11-18 06:51:42.171456] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:17:49.257 [2024-11-18 06:51:42.171466] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:17:49.257 [2024-11-18 06:51:42.171473] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:17:49.257 [2024-11-18 06:51:42.171485] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:17:49.257 [2024-11-18 06:51:42.171494] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:17:49.257 [2024-11-18 06:51:42.171503] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:17:49.257 [2024-11-18 06:51:42.171510] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:17:49.257 [2024-11-18 06:51:42.171520] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:17:49.257 [2024-11-18 06:51:42.171527] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:17:49.257 [2024-11-18 06:51:42.171536] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:17:49.257 [2024-11-18 06:51:42.171543] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:17:49.257 [2024-11-18 06:51:42.171553] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:17:49.257 [2024-11-18 06:51:42.171562] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:17:49.257 [2024-11-18 06:51:42.171573] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:17:49.258 [2024-11-18 06:51:42.171582] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:17:49.258 [2024-11-18 06:51:42.171593] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:17:49.258 [2024-11-18 06:51:42.171601] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:17:49.258 
[2024-11-18 06:51:42.171611] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:17:49.258 [2024-11-18 06:51:42.171620] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:17:49.258 [2024-11-18 06:51:42.171633] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:17:49.258 [2024-11-18 06:51:42.171641] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:17:49.258 [2024-11-18 06:51:42.171651] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:17:49.258 [2024-11-18 06:51:42.171659] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:17:49.258 [2024-11-18 06:51:42.171669] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:17:49.258 [2024-11-18 06:51:42.171676] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:17:49.258 [2024-11-18 06:51:42.171686] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:17:49.258 [2024-11-18 06:51:42.171694] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:17:49.258 [2024-11-18 06:51:42.171703] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:17:49.258 [2024-11-18 06:51:42.171711] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:17:49.258 [2024-11-18 06:51:42.171721] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:17:49.258 [2024-11-18 06:51:42.171729] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:17:49.258 [2024-11-18 06:51:42.171739] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:17:49.258 [2024-11-18 06:51:42.171747] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:17:49.258 [2024-11-18 06:51:42.171757] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:17:49.258 [2024-11-18 06:51:42.171765] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:17:49.258 [2024-11-18 06:51:42.171777] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:17:49.258 [2024-11-18 06:51:42.171785] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:17:49.258 [2024-11-18 06:51:42.171795] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:17:49.258 [2024-11-18 06:51:42.171804] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:17:49.258 [2024-11-18 06:51:42.171815] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:17:49.258 [2024-11-18 06:51:42.171823] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:17:49.258 [2024-11-18 06:51:42.171833] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 
state: free 00:17:49.258 [2024-11-18 06:51:42.171841] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:17:49.258 [2024-11-18 06:51:42.171858] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:17:49.258 [2024-11-18 06:51:42.171866] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:17:49.258 [2024-11-18 06:51:42.171876] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:17:49.258 [2024-11-18 06:51:42.171885] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:17:49.258 [2024-11-18 06:51:42.171897] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:17:49.258 [2024-11-18 06:51:42.171905] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:17:49.258 [2024-11-18 06:51:42.171915] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:17:49.258 [2024-11-18 06:51:42.171924] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:17:49.258 [2024-11-18 06:51:42.171937] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:17:49.258 [2024-11-18 06:51:42.171945] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:17:49.258 [2024-11-18 06:51:42.171956] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:17:49.258 [2024-11-18 06:51:42.171964] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:17:49.258 [2024-11-18 06:51:42.171991] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:17:49.258 [2024-11-18 06:51:42.172000] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:17:49.258 [2024-11-18 06:51:42.172011] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:17:49.258 [2024-11-18 06:51:42.172019] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:17:49.258 [2024-11-18 06:51:42.172029] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:17:49.258 [2024-11-18 06:51:42.172037] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:17:49.258 [2024-11-18 06:51:42.172047] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:17:49.258 [2024-11-18 06:51:42.172055] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:17:49.258 [2024-11-18 06:51:42.172065] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:17:49.258 [2024-11-18 06:51:42.172073] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:17:49.258 [2024-11-18 06:51:42.172084] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:17:49.258 [2024-11-18 06:51:42.172092] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 
0 / 261120 wr_cnt: 0 state: free 00:17:49.258 [2024-11-18 06:51:42.172104] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:17:49.258 [2024-11-18 06:51:42.172117] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:17:49.258 [2024-11-18 06:51:42.172127] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:17:49.258 [2024-11-18 06:51:42.172135] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:17:49.258 [2024-11-18 06:51:42.172144] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:17:49.258 [2024-11-18 06:51:42.172152] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:17:49.258 [2024-11-18 06:51:42.172162] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:17:49.258 [2024-11-18 06:51:42.172170] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:17:49.258 [2024-11-18 06:51:42.172180] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:17:49.258 [2024-11-18 06:51:42.172189] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:17:49.258 [2024-11-18 06:51:42.172199] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:17:49.258 [2024-11-18 06:51:42.172212] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:17:49.258 [2024-11-18 06:51:42.172222] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:17:49.258 [2024-11-18 06:51:42.172230] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:17:49.258 [2024-11-18 06:51:42.172241] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:17:49.258 [2024-11-18 06:51:42.172250] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:17:49.258 [2024-11-18 06:51:42.172270] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:17:49.258 [2024-11-18 06:51:42.172284] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 0cc73ce1-3fae-4d36-91d2-6119a87f6d65 00:17:49.259 [2024-11-18 06:51:42.172296] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:17:49.259 [2024-11-18 06:51:42.172303] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:17:49.259 [2024-11-18 06:51:42.172313] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:17:49.259 [2024-11-18 06:51:42.172322] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:17:49.259 [2024-11-18 06:51:42.172332] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:17:49.259 [2024-11-18 06:51:42.172344] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:17:49.259 [2024-11-18 06:51:42.172354] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:17:49.259 [2024-11-18 06:51:42.172361] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:17:49.259 [2024-11-18 06:51:42.172370] ftl_debug.c: 220:ftl_dev_dump_stats: 
*NOTICE*: [FTL][ftl0] start: 0 00:17:49.259 [2024-11-18 06:51:42.172378] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:49.259 [2024-11-18 06:51:42.172388] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:17:49.259 [2024-11-18 06:51:42.172396] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.103 ms 00:17:49.259 [2024-11-18 06:51:42.172406] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:49.259 [2024-11-18 06:51:42.174793] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:49.259 [2024-11-18 06:51:42.174856] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:17:49.259 [2024-11-18 06:51:42.174870] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.364 ms 00:17:49.259 [2024-11-18 06:51:42.174883] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:49.259 [2024-11-18 06:51:42.175024] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:49.259 [2024-11-18 06:51:42.175036] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:17:49.259 [2024-11-18 06:51:42.175046] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.115 ms 00:17:49.259 [2024-11-18 06:51:42.175056] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:49.259 [2024-11-18 06:51:42.183555] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:49.259 [2024-11-18 06:51:42.183620] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:49.259 [2024-11-18 06:51:42.183632] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:49.259 [2024-11-18 06:51:42.183645] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:49.259 [2024-11-18 06:51:42.183714] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:49.259 [2024-11-18 06:51:42.183726] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:49.259 [2024-11-18 06:51:42.183734] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:49.259 [2024-11-18 06:51:42.183744] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:49.259 [2024-11-18 06:51:42.183806] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:49.259 [2024-11-18 06:51:42.183821] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:49.259 [2024-11-18 06:51:42.183830] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:49.259 [2024-11-18 06:51:42.183840] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:49.259 [2024-11-18 06:51:42.183860] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:49.259 [2024-11-18 06:51:42.183870] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:49.259 [2024-11-18 06:51:42.183878] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:49.259 [2024-11-18 06:51:42.183888] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:49.259 [2024-11-18 06:51:42.197933] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:49.259 [2024-11-18 06:51:42.198009] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:49.259 [2024-11-18 06:51:42.198022] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:49.259 
[2024-11-18 06:51:42.198036] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:49.259 [2024-11-18 06:51:42.208635] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:49.259 [2024-11-18 06:51:42.208696] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:49.259 [2024-11-18 06:51:42.208707] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:49.259 [2024-11-18 06:51:42.208718] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:49.259 [2024-11-18 06:51:42.208789] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:49.259 [2024-11-18 06:51:42.208804] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:49.259 [2024-11-18 06:51:42.208813] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:49.259 [2024-11-18 06:51:42.208824] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:49.259 [2024-11-18 06:51:42.208870] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:49.259 [2024-11-18 06:51:42.208885] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:49.259 [2024-11-18 06:51:42.208892] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:49.259 [2024-11-18 06:51:42.208902] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:49.259 [2024-11-18 06:51:42.209019] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:49.259 [2024-11-18 06:51:42.209033] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:49.259 [2024-11-18 06:51:42.209041] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:49.259 [2024-11-18 06:51:42.209051] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:49.259 [2024-11-18 06:51:42.209082] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:49.259 [2024-11-18 06:51:42.209095] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:17:49.259 [2024-11-18 06:51:42.209103] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:49.259 [2024-11-18 06:51:42.209113] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:49.259 [2024-11-18 06:51:42.209152] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:49.259 [2024-11-18 06:51:42.209165] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:49.259 [2024-11-18 06:51:42.209174] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:49.259 [2024-11-18 06:51:42.209189] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:49.259 [2024-11-18 06:51:42.209233] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:49.259 [2024-11-18 06:51:42.209247] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:49.259 [2024-11-18 06:51:42.209257] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:49.259 [2024-11-18 06:51:42.209268] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:49.259 [2024-11-18 06:51:42.209412] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 69.881 ms, result 0 00:17:49.259 true 00:17:49.259 06:51:42 ftl.ftl_restore -- ftl/restore.sh@66 -- # killprocess 85786 00:17:49.259 
06:51:42 ftl.ftl_restore -- common/autotest_common.sh@954 -- # '[' -z 85786 ']' 00:17:49.259 06:51:42 ftl.ftl_restore -- common/autotest_common.sh@958 -- # kill -0 85786 00:17:49.259 06:51:42 ftl.ftl_restore -- common/autotest_common.sh@959 -- # uname 00:17:49.259 06:51:42 ftl.ftl_restore -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:49.259 06:51:42 ftl.ftl_restore -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 85786 00:17:49.259 06:51:42 ftl.ftl_restore -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:49.259 06:51:42 ftl.ftl_restore -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:49.259 killing process with pid 85786 00:17:49.259 06:51:42 ftl.ftl_restore -- common/autotest_common.sh@972 -- # echo 'killing process with pid 85786' 00:17:49.259 06:51:42 ftl.ftl_restore -- common/autotest_common.sh@973 -- # kill 85786 00:17:49.259 06:51:42 ftl.ftl_restore -- common/autotest_common.sh@978 -- # wait 85786 00:17:54.553 06:51:46 ftl.ftl_restore -- ftl/restore.sh@69 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile bs=4K count=256K 00:17:57.858 262144+0 records in 00:17:57.858 262144+0 records out 00:17:57.858 1073741824 bytes (1.1 GB, 1.0 GiB) copied, 3.77139 s, 285 MB/s 00:17:57.858 06:51:50 ftl.ftl_restore -- ftl/restore.sh@70 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:17:59.774 06:51:52 ftl.ftl_restore -- ftl/restore.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:17:59.774 [2024-11-18 06:51:52.515395] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 00:17:59.774 [2024-11-18 06:51:52.515497] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85999 ] 00:17:59.774 [2024-11-18 06:51:52.669000] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:59.774 [2024-11-18 06:51:52.690359] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:59.774 [2024-11-18 06:51:52.786173] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:59.774 [2024-11-18 06:51:52.786253] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:18:00.038 [2024-11-18 06:51:52.947182] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:00.038 [2024-11-18 06:51:52.947240] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:18:00.038 [2024-11-18 06:51:52.947255] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:18:00.038 [2024-11-18 06:51:52.947264] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:00.038 [2024-11-18 06:51:52.947322] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:00.038 [2024-11-18 06:51:52.947333] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:00.038 [2024-11-18 06:51:52.947341] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:18:00.038 [2024-11-18 06:51:52.947349] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:00.038 [2024-11-18 06:51:52.947373] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 
as write buffer cache 00:18:00.038 [2024-11-18 06:51:52.947747] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:18:00.038 [2024-11-18 06:51:52.947791] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:00.038 [2024-11-18 06:51:52.947799] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:00.038 [2024-11-18 06:51:52.947809] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.423 ms 00:18:00.038 [2024-11-18 06:51:52.947820] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:00.038 [2024-11-18 06:51:52.949512] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:18:00.038 [2024-11-18 06:51:52.953217] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:00.038 [2024-11-18 06:51:52.953265] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:18:00.038 [2024-11-18 06:51:52.953285] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.707 ms 00:18:00.038 [2024-11-18 06:51:52.953300] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:00.038 [2024-11-18 06:51:52.953374] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:00.038 [2024-11-18 06:51:52.953390] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:18:00.038 [2024-11-18 06:51:52.953399] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.025 ms 00:18:00.038 [2024-11-18 06:51:52.953407] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:00.038 [2024-11-18 06:51:52.961359] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:00.038 [2024-11-18 06:51:52.961404] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:00.038 [2024-11-18 06:51:52.961421] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.906 ms 00:18:00.038 [2024-11-18 06:51:52.961438] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:00.038 [2024-11-18 06:51:52.961540] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:00.038 [2024-11-18 06:51:52.961550] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:00.038 [2024-11-18 06:51:52.961563] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.072 ms 00:18:00.038 [2024-11-18 06:51:52.961574] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:00.038 [2024-11-18 06:51:52.961628] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:00.038 [2024-11-18 06:51:52.961638] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:18:00.038 [2024-11-18 06:51:52.961646] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:18:00.038 [2024-11-18 06:51:52.961654] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:00.038 [2024-11-18 06:51:52.961683] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:18:00.038 [2024-11-18 06:51:52.963785] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:00.038 [2024-11-18 06:51:52.963992] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:00.038 [2024-11-18 06:51:52.964012] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.109 ms 00:18:00.038 [2024-11-18 06:51:52.964022] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:00.038 [2024-11-18 06:51:52.964059] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:00.038 [2024-11-18 06:51:52.964068] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:18:00.038 [2024-11-18 06:51:52.964077] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:18:00.038 [2024-11-18 06:51:52.964085] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:00.038 [2024-11-18 06:51:52.964117] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:18:00.038 [2024-11-18 06:51:52.964138] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:18:00.038 [2024-11-18 06:51:52.964174] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:18:00.038 [2024-11-18 06:51:52.964193] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:18:00.038 [2024-11-18 06:51:52.964301] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:18:00.038 [2024-11-18 06:51:52.964312] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:18:00.038 [2024-11-18 06:51:52.964327] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:18:00.038 [2024-11-18 06:51:52.964341] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:18:00.038 [2024-11-18 06:51:52.964350] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:18:00.038 [2024-11-18 06:51:52.964361] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:18:00.038 [2024-11-18 06:51:52.964369] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:18:00.038 [2024-11-18 06:51:52.964376] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:18:00.038 [2024-11-18 06:51:52.964387] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:18:00.038 [2024-11-18 06:51:52.964396] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:00.038 [2024-11-18 06:51:52.964408] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:18:00.038 [2024-11-18 06:51:52.964415] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.285 ms 00:18:00.038 [2024-11-18 06:51:52.964425] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:00.038 [2024-11-18 06:51:52.964507] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:00.038 [2024-11-18 06:51:52.964520] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:18:00.038 [2024-11-18 06:51:52.964529] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:18:00.038 [2024-11-18 06:51:52.964537] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:00.038 [2024-11-18 06:51:52.964635] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:18:00.038 [2024-11-18 06:51:52.964650] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:18:00.038 [2024-11-18 06:51:52.964660] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:00.038 
[2024-11-18 06:51:52.964668] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:00.038 [2024-11-18 06:51:52.964681] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:18:00.038 [2024-11-18 06:51:52.964697] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:18:00.038 [2024-11-18 06:51:52.964706] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:18:00.038 [2024-11-18 06:51:52.964713] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:18:00.038 [2024-11-18 06:51:52.964722] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:18:00.038 [2024-11-18 06:51:52.964730] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:00.038 [2024-11-18 06:51:52.964741] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:18:00.038 [2024-11-18 06:51:52.964749] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:18:00.038 [2024-11-18 06:51:52.964757] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:00.038 [2024-11-18 06:51:52.964764] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:18:00.038 [2024-11-18 06:51:52.964775] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:18:00.038 [2024-11-18 06:51:52.964783] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:00.038 [2024-11-18 06:51:52.964791] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:18:00.038 [2024-11-18 06:51:52.964799] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:18:00.038 [2024-11-18 06:51:52.964807] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:00.038 [2024-11-18 06:51:52.964815] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:18:00.038 [2024-11-18 06:51:52.964823] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:18:00.038 [2024-11-18 06:51:52.964831] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:00.038 [2024-11-18 06:51:52.964839] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:18:00.038 [2024-11-18 06:51:52.964847] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:18:00.038 [2024-11-18 06:51:52.964855] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:00.038 [2024-11-18 06:51:52.964863] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:18:00.038 [2024-11-18 06:51:52.964876] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:18:00.038 [2024-11-18 06:51:52.964883] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:00.038 [2024-11-18 06:51:52.964890] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:18:00.038 [2024-11-18 06:51:52.964896] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:18:00.038 [2024-11-18 06:51:52.964903] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:00.038 [2024-11-18 06:51:52.964909] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:18:00.039 [2024-11-18 06:51:52.964916] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:18:00.039 [2024-11-18 06:51:52.964922] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:00.039 [2024-11-18 06:51:52.964929] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region 
trim_md_mirror 00:18:00.039 [2024-11-18 06:51:52.964935] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:18:00.039 [2024-11-18 06:51:52.964941] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:00.039 [2024-11-18 06:51:52.964948] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:18:00.039 [2024-11-18 06:51:52.964955] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:18:00.039 [2024-11-18 06:51:52.964961] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:00.039 [2024-11-18 06:51:52.964967] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:18:00.039 [2024-11-18 06:51:52.964998] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:18:00.039 [2024-11-18 06:51:52.965009] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:00.039 [2024-11-18 06:51:52.965018] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:18:00.039 [2024-11-18 06:51:52.965027] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:18:00.039 [2024-11-18 06:51:52.965037] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:00.039 [2024-11-18 06:51:52.965046] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:00.039 [2024-11-18 06:51:52.965059] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:18:00.039 [2024-11-18 06:51:52.965066] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:18:00.039 [2024-11-18 06:51:52.965074] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:18:00.039 [2024-11-18 06:51:52.965081] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:18:00.039 [2024-11-18 06:51:52.965088] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:18:00.039 [2024-11-18 06:51:52.965095] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:18:00.039 [2024-11-18 06:51:52.965104] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:18:00.039 [2024-11-18 06:51:52.965113] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:00.039 [2024-11-18 06:51:52.965122] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:18:00.039 [2024-11-18 06:51:52.965130] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:18:00.039 [2024-11-18 06:51:52.965137] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:18:00.039 [2024-11-18 06:51:52.965146] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:18:00.039 [2024-11-18 06:51:52.965154] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:18:00.039 [2024-11-18 06:51:52.965162] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:18:00.039 [2024-11-18 06:51:52.965170] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd 
ver:2 blk_offs:0x6920 blk_sz:0x800 00:18:00.039 [2024-11-18 06:51:52.965177] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:18:00.039 [2024-11-18 06:51:52.965184] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:18:00.039 [2024-11-18 06:51:52.965191] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:18:00.039 [2024-11-18 06:51:52.965199] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:18:00.039 [2024-11-18 06:51:52.965206] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:18:00.039 [2024-11-18 06:51:52.965213] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:18:00.039 [2024-11-18 06:51:52.965230] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:18:00.039 [2024-11-18 06:51:52.965237] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:18:00.039 [2024-11-18 06:51:52.965249] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:00.039 [2024-11-18 06:51:52.965258] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:18:00.039 [2024-11-18 06:51:52.965265] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:18:00.039 [2024-11-18 06:51:52.965272] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:18:00.039 [2024-11-18 06:51:52.965281] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:18:00.039 [2024-11-18 06:51:52.965289] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:00.039 [2024-11-18 06:51:52.965297] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:18:00.039 [2024-11-18 06:51:52.965304] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.722 ms 00:18:00.039 [2024-11-18 06:51:52.965312] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:00.039 [2024-11-18 06:51:52.979096] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:00.039 [2024-11-18 06:51:52.979288] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:00.039 [2024-11-18 06:51:52.979306] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.732 ms 00:18:00.039 [2024-11-18 06:51:52.979314] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:00.039 [2024-11-18 06:51:52.979406] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:00.039 [2024-11-18 06:51:52.979414] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:18:00.039 [2024-11-18 06:51:52.979423] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.059 ms 
00:18:00.039 [2024-11-18 06:51:52.979430] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:00.039 [2024-11-18 06:51:52.999492] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:00.039 [2024-11-18 06:51:52.999559] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:00.039 [2024-11-18 06:51:52.999587] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.999 ms 00:18:00.039 [2024-11-18 06:51:52.999599] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:00.039 [2024-11-18 06:51:52.999662] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:00.039 [2024-11-18 06:51:52.999676] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:00.039 [2024-11-18 06:51:52.999694] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:18:00.039 [2024-11-18 06:51:52.999709] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:00.039 [2024-11-18 06:51:53.000361] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:00.039 [2024-11-18 06:51:53.000403] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:00.039 [2024-11-18 06:51:53.000419] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.576 ms 00:18:00.039 [2024-11-18 06:51:53.000432] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:00.039 [2024-11-18 06:51:53.000633] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:00.039 [2024-11-18 06:51:53.000655] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:00.039 [2024-11-18 06:51:53.000668] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.165 ms 00:18:00.039 [2024-11-18 06:51:53.000679] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:00.039 [2024-11-18 06:51:53.009146] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:00.039 [2024-11-18 06:51:53.009201] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:00.039 [2024-11-18 06:51:53.009227] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.439 ms 00:18:00.039 [2024-11-18 06:51:53.009244] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:00.039 [2024-11-18 06:51:53.013149] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:18:00.039 [2024-11-18 06:51:53.013202] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:18:00.039 [2024-11-18 06:51:53.013215] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:00.039 [2024-11-18 06:51:53.013224] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:18:00.039 [2024-11-18 06:51:53.013233] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.858 ms 00:18:00.039 [2024-11-18 06:51:53.013241] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:00.039 [2024-11-18 06:51:53.029029] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:00.039 [2024-11-18 06:51:53.029078] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:18:00.039 [2024-11-18 06:51:53.029094] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.728 ms 00:18:00.039 [2024-11-18 06:51:53.029102] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:18:00.039 [2024-11-18 06:51:53.031870] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:00.039 [2024-11-18 06:51:53.031916] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:18:00.039 [2024-11-18 06:51:53.031926] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.714 ms 00:18:00.039 [2024-11-18 06:51:53.031933] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:00.039 [2024-11-18 06:51:53.034528] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:00.039 [2024-11-18 06:51:53.034697] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:18:00.039 [2024-11-18 06:51:53.034716] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.529 ms 00:18:00.039 [2024-11-18 06:51:53.034724] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:00.039 [2024-11-18 06:51:53.035120] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:00.039 [2024-11-18 06:51:53.035138] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:18:00.039 [2024-11-18 06:51:53.035148] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.326 ms 00:18:00.039 [2024-11-18 06:51:53.035157] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:00.039 [2024-11-18 06:51:53.058341] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:00.039 [2024-11-18 06:51:53.058415] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:18:00.039 [2024-11-18 06:51:53.058429] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.163 ms 00:18:00.039 [2024-11-18 06:51:53.058438] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:00.040 [2024-11-18 06:51:53.066397] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:18:00.040 [2024-11-18 06:51:53.069371] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:00.040 [2024-11-18 06:51:53.069533] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:18:00.040 [2024-11-18 06:51:53.069562] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.879 ms 00:18:00.040 [2024-11-18 06:51:53.069574] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:00.040 [2024-11-18 06:51:53.069650] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:00.040 [2024-11-18 06:51:53.069666] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:18:00.040 [2024-11-18 06:51:53.069675] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:18:00.040 [2024-11-18 06:51:53.069683] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:00.040 [2024-11-18 06:51:53.069752] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:00.040 [2024-11-18 06:51:53.069763] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:18:00.040 [2024-11-18 06:51:53.069773] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:18:00.040 [2024-11-18 06:51:53.069787] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:00.040 [2024-11-18 06:51:53.069812] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:00.040 [2024-11-18 06:51:53.069821] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start 
core poller 00:18:00.040 [2024-11-18 06:51:53.069829] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:18:00.040 [2024-11-18 06:51:53.069837] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:00.040 [2024-11-18 06:51:53.069874] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:18:00.040 [2024-11-18 06:51:53.069885] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:00.040 [2024-11-18 06:51:53.069893] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:18:00.040 [2024-11-18 06:51:53.069902] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:18:00.040 [2024-11-18 06:51:53.069910] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:00.040 [2024-11-18 06:51:53.075512] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:00.040 [2024-11-18 06:51:53.075561] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:18:00.040 [2024-11-18 06:51:53.075572] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.581 ms 00:18:00.040 [2024-11-18 06:51:53.075580] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:00.040 [2024-11-18 06:51:53.075674] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:00.040 [2024-11-18 06:51:53.075685] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:18:00.040 [2024-11-18 06:51:53.075698] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.044 ms 00:18:00.040 [2024-11-18 06:51:53.075706] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:00.040 [2024-11-18 06:51:53.076838] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 129.206 ms, result 0 00:18:01.427  [2024-11-18T06:51:55.456Z] Copying: 14/1024 [MB] (14 MBps) [2024-11-18T06:51:56.399Z] Copying: 26/1024 [MB] (12 MBps) [2024-11-18T06:51:57.342Z] Copying: 44/1024 [MB] (18 MBps) [2024-11-18T06:51:58.287Z] Copying: 60/1024 [MB] (16 MBps) [2024-11-18T06:51:59.230Z] Copying: 80/1024 [MB] (20 MBps) [2024-11-18T06:52:00.173Z] Copying: 116/1024 [MB] (35 MBps) [2024-11-18T06:52:01.118Z] Copying: 137/1024 [MB] (21 MBps) [2024-11-18T06:52:02.506Z] Copying: 154/1024 [MB] (16 MBps) [2024-11-18T06:52:03.451Z] Copying: 168/1024 [MB] (14 MBps) [2024-11-18T06:52:04.395Z] Copying: 199/1024 [MB] (30 MBps) [2024-11-18T06:52:05.338Z] Copying: 214/1024 [MB] (14 MBps) [2024-11-18T06:52:06.286Z] Copying: 230/1024 [MB] (15 MBps) [2024-11-18T06:52:07.232Z] Copying: 245/1024 [MB] (14 MBps) [2024-11-18T06:52:08.176Z] Copying: 274/1024 [MB] (29 MBps) [2024-11-18T06:52:09.195Z] Copying: 325/1024 [MB] (50 MBps) [2024-11-18T06:52:10.149Z] Copying: 366/1024 [MB] (41 MBps) [2024-11-18T06:52:11.092Z] Copying: 386/1024 [MB] (19 MBps) [2024-11-18T06:52:12.482Z] Copying: 403/1024 [MB] (17 MBps) [2024-11-18T06:52:13.428Z] Copying: 417/1024 [MB] (13 MBps) [2024-11-18T06:52:14.373Z] Copying: 431/1024 [MB] (14 MBps) [2024-11-18T06:52:15.317Z] Copying: 447/1024 [MB] (15 MBps) [2024-11-18T06:52:16.261Z] Copying: 462/1024 [MB] (15 MBps) [2024-11-18T06:52:17.202Z] Copying: 472/1024 [MB] (10 MBps) [2024-11-18T06:52:18.145Z] Copying: 487/1024 [MB] (14 MBps) [2024-11-18T06:52:19.528Z] Copying: 504/1024 [MB] (17 MBps) [2024-11-18T06:52:20.099Z] Copying: 521/1024 [MB] (16 MBps) [2024-11-18T06:52:21.487Z] Copying: 531/1024 [MB] (10 MBps) 
[2024-11-18T06:52:22.429Z] Copying: 541/1024 [MB] (10 MBps) [2024-11-18T06:52:23.374Z] Copying: 551/1024 [MB] (10 MBps) [2024-11-18T06:52:24.320Z] Copying: 562/1024 [MB] (10 MBps) [2024-11-18T06:52:25.265Z] Copying: 572/1024 [MB] (10 MBps) [2024-11-18T06:52:26.209Z] Copying: 582/1024 [MB] (10 MBps) [2024-11-18T06:52:27.153Z] Copying: 621/1024 [MB] (38 MBps) [2024-11-18T06:52:28.097Z] Copying: 632/1024 [MB] (10 MBps) [2024-11-18T06:52:29.485Z] Copying: 642/1024 [MB] (10 MBps) [2024-11-18T06:52:30.429Z] Copying: 654/1024 [MB] (11 MBps) [2024-11-18T06:52:31.374Z] Copying: 664/1024 [MB] (10 MBps) [2024-11-18T06:52:32.319Z] Copying: 677/1024 [MB] (12 MBps) [2024-11-18T06:52:33.263Z] Copying: 687/1024 [MB] (10 MBps) [2024-11-18T06:52:34.206Z] Copying: 697/1024 [MB] (10 MBps) [2024-11-18T06:52:35.150Z] Copying: 708/1024 [MB] (10 MBps) [2024-11-18T06:52:36.094Z] Copying: 741/1024 [MB] (33 MBps) [2024-11-18T06:52:37.484Z] Copying: 753/1024 [MB] (12 MBps) [2024-11-18T06:52:38.427Z] Copying: 766/1024 [MB] (12 MBps) [2024-11-18T06:52:39.370Z] Copying: 776/1024 [MB] (10 MBps) [2024-11-18T06:52:40.386Z] Copying: 788/1024 [MB] (12 MBps) [2024-11-18T06:52:41.331Z] Copying: 799/1024 [MB] (11 MBps) [2024-11-18T06:52:42.277Z] Copying: 812/1024 [MB] (12 MBps) [2024-11-18T06:52:43.223Z] Copying: 829/1024 [MB] (16 MBps) [2024-11-18T06:52:44.168Z] Copying: 843/1024 [MB] (14 MBps) [2024-11-18T06:52:45.114Z] Copying: 853/1024 [MB] (10 MBps) [2024-11-18T06:52:46.506Z] Copying: 869/1024 [MB] (15 MBps) [2024-11-18T06:52:47.450Z] Copying: 889/1024 [MB] (19 MBps) [2024-11-18T06:52:48.394Z] Copying: 914/1024 [MB] (24 MBps) [2024-11-18T06:52:49.337Z] Copying: 926/1024 [MB] (12 MBps) [2024-11-18T06:52:50.280Z] Copying: 936/1024 [MB] (10 MBps) [2024-11-18T06:52:51.224Z] Copying: 951/1024 [MB] (14 MBps) [2024-11-18T06:52:52.166Z] Copying: 964/1024 [MB] (12 MBps) [2024-11-18T06:52:53.109Z] Copying: 974/1024 [MB] (10 MBps) [2024-11-18T06:52:54.495Z] Copying: 985/1024 [MB] (10 MBps) [2024-11-18T06:52:55.437Z] Copying: 995/1024 [MB] (10 MBps) [2024-11-18T06:52:56.381Z] Copying: 1006/1024 [MB] (11 MBps) [2024-11-18T06:52:56.382Z] Copying: 1024/1024 [MB] (average 16 MBps)[2024-11-18 06:52:56.058707] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:03.295 [2024-11-18 06:52:56.058767] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:19:03.295 [2024-11-18 06:52:56.058803] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:19:03.295 [2024-11-18 06:52:56.058818] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:03.295 [2024-11-18 06:52:56.058844] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:19:03.295 [2024-11-18 06:52:56.059687] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:03.295 [2024-11-18 06:52:56.059729] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:19:03.295 [2024-11-18 06:52:56.059750] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.824 ms 00:19:03.295 [2024-11-18 06:52:56.059759] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:03.295 [2024-11-18 06:52:56.061708] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:03.295 [2024-11-18 06:52:56.061880] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:19:03.295 [2024-11-18 06:52:56.061899] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.919 ms 
00:19:03.295 [2024-11-18 06:52:56.061907] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:03.295 [2024-11-18 06:52:56.078636] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:03.295 [2024-11-18 06:52:56.078691] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:19:03.295 [2024-11-18 06:52:56.078708] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.706 ms 00:19:03.295 [2024-11-18 06:52:56.078716] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:03.295 [2024-11-18 06:52:56.084920] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:03.295 [2024-11-18 06:52:56.085110] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:19:03.295 [2024-11-18 06:52:56.085132] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.163 ms 00:19:03.295 [2024-11-18 06:52:56.085140] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:03.295 [2024-11-18 06:52:56.088107] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:03.295 [2024-11-18 06:52:56.088271] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:19:03.295 [2024-11-18 06:52:56.088288] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.894 ms 00:19:03.295 [2024-11-18 06:52:56.088297] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:03.295 [2024-11-18 06:52:56.093268] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:03.295 [2024-11-18 06:52:56.093320] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:19:03.295 [2024-11-18 06:52:56.093331] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.866 ms 00:19:03.295 [2024-11-18 06:52:56.093339] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:03.295 [2024-11-18 06:52:56.093461] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:03.295 [2024-11-18 06:52:56.093470] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:19:03.295 [2024-11-18 06:52:56.093480] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.078 ms 00:19:03.295 [2024-11-18 06:52:56.093488] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:03.295 [2024-11-18 06:52:56.096646] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:03.295 [2024-11-18 06:52:56.096708] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:19:03.295 [2024-11-18 06:52:56.096717] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.139 ms 00:19:03.295 [2024-11-18 06:52:56.096723] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:03.295 [2024-11-18 06:52:56.099354] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:03.295 [2024-11-18 06:52:56.099521] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:19:03.295 [2024-11-18 06:52:56.099539] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.591 ms 00:19:03.295 [2024-11-18 06:52:56.099547] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:03.295 [2024-11-18 06:52:56.101893] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:03.295 [2024-11-18 06:52:56.101941] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:19:03.295 [2024-11-18 06:52:56.101951] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.245 ms 00:19:03.295 [2024-11-18 06:52:56.101958] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:03.295 [2024-11-18 06:52:56.104023] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:03.295 [2024-11-18 06:52:56.104067] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:19:03.295 [2024-11-18 06:52:56.104076] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.988 ms 00:19:03.295 [2024-11-18 06:52:56.104083] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:03.295 [2024-11-18 06:52:56.104119] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:19:03.295 [2024-11-18 06:52:56.104142] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:19:03.295 [2024-11-18 06:52:56.104152] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:19:03.295 [2024-11-18 06:52:56.104160] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:19:03.295 [2024-11-18 06:52:56.104167] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:19:03.295 [2024-11-18 06:52:56.104175] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:19:03.295 [2024-11-18 06:52:56.104182] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:19:03.295 [2024-11-18 06:52:56.104190] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:19:03.295 [2024-11-18 06:52:56.104198] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:19:03.295 [2024-11-18 06:52:56.104205] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:19:03.295 [2024-11-18 06:52:56.104212] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:19:03.295 [2024-11-18 06:52:56.104220] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:19:03.295 [2024-11-18 06:52:56.104227] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:19:03.295 [2024-11-18 06:52:56.104235] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:19:03.295 [2024-11-18 06:52:56.104242] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:19:03.295 [2024-11-18 06:52:56.104249] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:19:03.295 [2024-11-18 06:52:56.104256] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:19:03.295 [2024-11-18 06:52:56.104263] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:19:03.295 [2024-11-18 06:52:56.104270] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:19:03.295 [2024-11-18 06:52:56.104277] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:19:03.295 [2024-11-18 06:52:56.104284] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 
261120 wr_cnt: 0 state: free [... bands 21 through 94 elided: every entry in this span is identical, "0 / 261120 wr_cnt: 0 state: free" ...] 00:19:03.296 [2024-11-18 
06:52:56.104846] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:19:03.296 [2024-11-18 06:52:56.104853] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:19:03.296 [2024-11-18 06:52:56.104860] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:19:03.296 [2024-11-18 06:52:56.104868] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:19:03.296 [2024-11-18 06:52:56.104875] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:19:03.296 [2024-11-18 06:52:56.104883] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:19:03.296 [2024-11-18 06:52:56.104898] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:19:03.296 [2024-11-18 06:52:56.104906] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 0cc73ce1-3fae-4d36-91d2-6119a87f6d65 00:19:03.296 [2024-11-18 06:52:56.104915] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:19:03.296 [2024-11-18 06:52:56.104922] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:19:03.296 [2024-11-18 06:52:56.104929] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:19:03.296 [2024-11-18 06:52:56.104937] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:19:03.296 [2024-11-18 06:52:56.104946] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:19:03.296 [2024-11-18 06:52:56.104954] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:19:03.296 [2024-11-18 06:52:56.104962] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:19:03.296 [2024-11-18 06:52:56.104969] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:19:03.296 [2024-11-18 06:52:56.104999] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:19:03.296 [2024-11-18 06:52:56.105007] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:03.296 [2024-11-18 06:52:56.105023] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:19:03.296 [2024-11-18 06:52:56.105038] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.889 ms 00:19:03.296 [2024-11-18 06:52:56.105050] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:03.296 [2024-11-18 06:52:56.107431] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:03.296 [2024-11-18 06:52:56.107465] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:19:03.296 [2024-11-18 06:52:56.107476] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.362 ms 00:19:03.296 [2024-11-18 06:52:56.107484] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:03.296 [2024-11-18 06:52:56.107606] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:03.296 [2024-11-18 06:52:56.107623] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:19:03.296 [2024-11-18 06:52:56.107632] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.099 ms 00:19:03.296 [2024-11-18 06:52:56.107640] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:03.296 [2024-11-18 06:52:56.114946] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] 
Rollback 00:19:03.296 [2024-11-18 06:52:56.115159] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:03.296 [2024-11-18 06:52:56.115178] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:03.296 [2024-11-18 06:52:56.115187] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:03.296 [2024-11-18 06:52:56.115246] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:03.296 [2024-11-18 06:52:56.115261] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:03.296 [2024-11-18 06:52:56.115275] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:03.296 [2024-11-18 06:52:56.115283] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:03.296 [2024-11-18 06:52:56.115357] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:03.296 [2024-11-18 06:52:56.115368] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:03.296 [2024-11-18 06:52:56.115376] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:03.296 [2024-11-18 06:52:56.115383] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:03.296 [2024-11-18 06:52:56.115399] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:03.296 [2024-11-18 06:52:56.115407] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:03.296 [2024-11-18 06:52:56.115418] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:03.296 [2024-11-18 06:52:56.115425] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:03.296 [2024-11-18 06:52:56.129274] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:03.296 [2024-11-18 06:52:56.129328] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:03.296 [2024-11-18 06:52:56.129340] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:03.296 [2024-11-18 06:52:56.129348] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:03.297 [2024-11-18 06:52:56.140653] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:03.297 [2024-11-18 06:52:56.140707] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:03.297 [2024-11-18 06:52:56.140727] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:03.297 [2024-11-18 06:52:56.140735] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:03.297 [2024-11-18 06:52:56.140790] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:03.297 [2024-11-18 06:52:56.140801] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:03.297 [2024-11-18 06:52:56.140817] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:03.297 [2024-11-18 06:52:56.140825] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:03.297 [2024-11-18 06:52:56.140866] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:03.297 [2024-11-18 06:52:56.140876] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:03.297 [2024-11-18 06:52:56.140884] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:03.297 [2024-11-18 06:52:56.140895] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:03.297 [2024-11-18 
06:52:56.140964] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:03.297 [2024-11-18 06:52:56.141008] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:03.297 [2024-11-18 06:52:56.141017] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:03.297 [2024-11-18 06:52:56.141026] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:03.297 [2024-11-18 06:52:56.141068] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:03.297 [2024-11-18 06:52:56.141078] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:19:03.297 [2024-11-18 06:52:56.141090] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:03.297 [2024-11-18 06:52:56.141102] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:03.297 [2024-11-18 06:52:56.141147] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:03.297 [2024-11-18 06:52:56.141164] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:03.297 [2024-11-18 06:52:56.141176] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:03.297 [2024-11-18 06:52:56.141188] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:03.297 [2024-11-18 06:52:56.141257] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:03.297 [2024-11-18 06:52:56.141273] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:03.297 [2024-11-18 06:52:56.141286] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:03.297 [2024-11-18 06:52:56.141302] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:03.297 [2024-11-18 06:52:56.141485] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 82.722 ms, result 0 00:19:03.297 00:19:03.297 00:19:03.557 06:52:56 ftl.ftl_restore -- ftl/restore.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --count=262144 00:19:03.557 [2024-11-18 06:52:56.448389] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 
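The spdk_dd invocation above is the read-back half of the restore test: it copies the contents of the ftl0 bdev into a regular file so the test can compare it against the data written earlier. A minimal annotated restatement of that command follows; the flag glosses and the 4 KiB block-size figure are inferred from this log (262144 blocks reported as 1024 MB of copy progress), not quoted from spdk_dd documentation:

  # --ib    input bdev to read from (the FTL device under test)
  # --of    output file on the host filesystem
  # --json  SPDK JSON config, presumably recreating the ftl0 bdev for this process
  # --count number of blocks to copy
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd \
    --ib=ftl0 \
    --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile \
    --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json \
    --count=262144

  # Size check: at an assumed 4 KiB per block,
  # 262144 blocks = 1 GiB, matching the "Copying: 1024/1024 [MB]"
  # total reported by the progress ticks earlier in this log.
  echo $(( 262144 * 4096 / 1024 / 1024 ))   # -> 1024

The per-tick MBps figures in the "Copying:" lines are spdk_dd's own progress reporting over that same 1024 MB transfer.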
00:19:03.557 [2024-11-18 06:52:56.448545] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86665 ] 00:19:03.557 [2024-11-18 06:52:56.609370] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:03.557 [2024-11-18 06:52:56.637898] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:03.817 [2024-11-18 06:52:56.752768] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:03.817 [2024-11-18 06:52:56.753143] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:04.080 [2024-11-18 06:52:56.913825] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:04.080 [2024-11-18 06:52:56.913886] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:19:04.080 [2024-11-18 06:52:56.913907] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:19:04.080 [2024-11-18 06:52:56.913919] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:04.080 [2024-11-18 06:52:56.914007] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:04.080 [2024-11-18 06:52:56.914019] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:04.080 [2024-11-18 06:52:56.914028] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:19:04.080 [2024-11-18 06:52:56.914040] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:04.080 [2024-11-18 06:52:56.914065] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:19:04.080 [2024-11-18 06:52:56.914485] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:19:04.080 [2024-11-18 06:52:56.914518] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:04.080 [2024-11-18 06:52:56.914526] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:04.080 [2024-11-18 06:52:56.914536] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.458 ms 00:19:04.080 [2024-11-18 06:52:56.914548] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:04.080 [2024-11-18 06:52:56.916293] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:19:04.080 [2024-11-18 06:52:56.919889] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:04.080 [2024-11-18 06:52:56.919939] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:19:04.080 [2024-11-18 06:52:56.919951] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.598 ms 00:19:04.080 [2024-11-18 06:52:56.919965] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:04.080 [2024-11-18 06:52:56.920056] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:04.080 [2024-11-18 06:52:56.920069] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:19:04.080 [2024-11-18 06:52:56.920078] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.024 ms 00:19:04.080 [2024-11-18 06:52:56.920092] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:04.080 [2024-11-18 06:52:56.928072] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:19:04.080 [2024-11-18 06:52:56.928122] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:04.080 [2024-11-18 06:52:56.928135] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.937 ms 00:19:04.080 [2024-11-18 06:52:56.928143] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:04.080 [2024-11-18 06:52:56.928241] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:04.080 [2024-11-18 06:52:56.928251] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:04.080 [2024-11-18 06:52:56.928260] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.072 ms 00:19:04.080 [2024-11-18 06:52:56.928270] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:04.080 [2024-11-18 06:52:56.928326] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:04.080 [2024-11-18 06:52:56.928336] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:19:04.080 [2024-11-18 06:52:56.928345] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:19:04.080 [2024-11-18 06:52:56.928352] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:04.080 [2024-11-18 06:52:56.928379] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:19:04.080 [2024-11-18 06:52:56.930363] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:04.080 [2024-11-18 06:52:56.930395] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:04.080 [2024-11-18 06:52:56.930405] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.990 ms 00:19:04.080 [2024-11-18 06:52:56.930413] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:04.080 [2024-11-18 06:52:56.930447] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:04.080 [2024-11-18 06:52:56.930456] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:19:04.080 [2024-11-18 06:52:56.930465] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:19:04.080 [2024-11-18 06:52:56.930473] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:04.080 [2024-11-18 06:52:56.930508] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:19:04.080 [2024-11-18 06:52:56.930531] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:19:04.080 [2024-11-18 06:52:56.930569] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:19:04.080 [2024-11-18 06:52:56.930590] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:19:04.080 [2024-11-18 06:52:56.930697] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:19:04.080 [2024-11-18 06:52:56.930708] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:19:04.080 [2024-11-18 06:52:56.930720] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:19:04.080 [2024-11-18 06:52:56.930735] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:19:04.080 [2024-11-18 06:52:56.930744] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:19:04.080 [2024-11-18 06:52:56.930753] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:19:04.080 [2024-11-18 06:52:56.930765] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:19:04.080 [2024-11-18 06:52:56.930773] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:19:04.080 [2024-11-18 06:52:56.930799] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:19:04.080 [2024-11-18 06:52:56.930808] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:04.080 [2024-11-18 06:52:56.930816] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:19:04.080 [2024-11-18 06:52:56.930825] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.305 ms 00:19:04.080 [2024-11-18 06:52:56.930834] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:04.080 [2024-11-18 06:52:56.930921] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:04.080 [2024-11-18 06:52:56.930932] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:19:04.080 [2024-11-18 06:52:56.930941] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.072 ms 00:19:04.080 [2024-11-18 06:52:56.930952] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:04.080 [2024-11-18 06:52:56.931086] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:19:04.080 [2024-11-18 06:52:56.931104] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:19:04.080 [2024-11-18 06:52:56.931114] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:04.080 [2024-11-18 06:52:56.931123] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:04.080 [2024-11-18 06:52:56.931133] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:19:04.080 [2024-11-18 06:52:56.931147] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:19:04.080 [2024-11-18 06:52:56.931156] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:19:04.080 [2024-11-18 06:52:56.931165] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:19:04.080 [2024-11-18 06:52:56.931174] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:19:04.080 [2024-11-18 06:52:56.931181] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:04.080 [2024-11-18 06:52:56.931193] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:19:04.080 [2024-11-18 06:52:56.931202] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:19:04.080 [2024-11-18 06:52:56.931209] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:04.080 [2024-11-18 06:52:56.931217] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:19:04.080 [2024-11-18 06:52:56.931225] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:19:04.080 [2024-11-18 06:52:56.931233] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:04.080 [2024-11-18 06:52:56.931242] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:19:04.080 [2024-11-18 06:52:56.931253] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:19:04.080 [2024-11-18 06:52:56.931261] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:04.081 [2024-11-18 06:52:56.931270] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:19:04.081 [2024-11-18 06:52:56.931278] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:19:04.081 [2024-11-18 06:52:56.931287] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:04.081 [2024-11-18 06:52:56.931295] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:19:04.081 [2024-11-18 06:52:56.931303] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:19:04.081 [2024-11-18 06:52:56.931311] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:04.081 [2024-11-18 06:52:56.931319] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:19:04.081 [2024-11-18 06:52:56.931335] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:19:04.081 [2024-11-18 06:52:56.931343] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:04.081 [2024-11-18 06:52:56.931351] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:19:04.081 [2024-11-18 06:52:56.931359] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:19:04.081 [2024-11-18 06:52:56.931366] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:04.081 [2024-11-18 06:52:56.931374] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:19:04.081 [2024-11-18 06:52:56.931382] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:19:04.081 [2024-11-18 06:52:56.931389] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:04.081 [2024-11-18 06:52:56.931397] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:19:04.081 [2024-11-18 06:52:56.931405] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:19:04.081 [2024-11-18 06:52:56.931412] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:04.081 [2024-11-18 06:52:56.931419] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:19:04.081 [2024-11-18 06:52:56.931427] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:19:04.081 [2024-11-18 06:52:56.931435] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:04.081 [2024-11-18 06:52:56.931442] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:19:04.081 [2024-11-18 06:52:56.931449] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:19:04.081 [2024-11-18 06:52:56.931459] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:04.081 [2024-11-18 06:52:56.931467] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:19:04.081 [2024-11-18 06:52:56.931476] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:19:04.081 [2024-11-18 06:52:56.931487] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:04.081 [2024-11-18 06:52:56.931495] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:04.081 [2024-11-18 06:52:56.931504] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:19:04.081 [2024-11-18 06:52:56.931511] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:19:04.081 [2024-11-18 06:52:56.931519] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:19:04.081 
[2024-11-18 06:52:56.931526] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:19:04.081 [2024-11-18 06:52:56.931533] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:19:04.081 [2024-11-18 06:52:56.931539] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:19:04.081 [2024-11-18 06:52:56.931549] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:19:04.081 [2024-11-18 06:52:56.931558] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:04.081 [2024-11-18 06:52:56.931567] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:19:04.081 [2024-11-18 06:52:56.931574] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:19:04.081 [2024-11-18 06:52:56.931581] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:19:04.081 [2024-11-18 06:52:56.931590] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:19:04.081 [2024-11-18 06:52:56.931597] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:19:04.081 [2024-11-18 06:52:56.931605] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:19:04.081 [2024-11-18 06:52:56.931611] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:19:04.081 [2024-11-18 06:52:56.931618] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:19:04.081 [2024-11-18 06:52:56.931625] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:19:04.081 [2024-11-18 06:52:56.931632] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:19:04.081 [2024-11-18 06:52:56.931639] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:19:04.081 [2024-11-18 06:52:56.931647] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:19:04.081 [2024-11-18 06:52:56.931653] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:19:04.081 [2024-11-18 06:52:56.931660] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:19:04.081 [2024-11-18 06:52:56.931667] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:19:04.081 [2024-11-18 06:52:56.931676] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:04.081 [2024-11-18 06:52:56.931684] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:19:04.081 [2024-11-18 06:52:56.931691] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:19:04.081 [2024-11-18 06:52:56.931698] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:19:04.081 [2024-11-18 06:52:56.931708] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:19:04.081 [2024-11-18 06:52:56.931716] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:04.081 [2024-11-18 06:52:56.931727] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:19:04.081 [2024-11-18 06:52:56.931735] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.700 ms 00:19:04.081 [2024-11-18 06:52:56.931742] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:04.081 [2024-11-18 06:52:56.945589] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:04.081 [2024-11-18 06:52:56.945635] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:04.081 [2024-11-18 06:52:56.945653] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.796 ms 00:19:04.081 [2024-11-18 06:52:56.945661] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:04.081 [2024-11-18 06:52:56.945751] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:04.081 [2024-11-18 06:52:56.945764] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:19:04.081 [2024-11-18 06:52:56.945773] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:19:04.081 [2024-11-18 06:52:56.945781] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:04.081 [2024-11-18 06:52:56.970690] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:04.081 [2024-11-18 06:52:56.970761] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:04.081 [2024-11-18 06:52:56.970799] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.850 ms 00:19:04.081 [2024-11-18 06:52:56.970817] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:04.081 [2024-11-18 06:52:56.970883] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:04.081 [2024-11-18 06:52:56.970900] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:04.081 [2024-11-18 06:52:56.970914] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:19:04.081 [2024-11-18 06:52:56.970933] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:04.081 [2024-11-18 06:52:56.971574] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:04.081 [2024-11-18 06:52:56.971935] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:04.081 [2024-11-18 06:52:56.971967] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.518 ms 00:19:04.081 [2024-11-18 06:52:56.972008] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:04.081 [2024-11-18 06:52:56.972241] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:04.081 [2024-11-18 06:52:56.972257] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:04.081 [2024-11-18 06:52:56.972270] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.176 ms 00:19:04.081 [2024-11-18 06:52:56.972282] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:04.081 [2024-11-18 06:52:56.980635] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:04.081 [2024-11-18 06:52:56.980687] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:04.081 [2024-11-18 06:52:56.980706] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.324 ms 00:19:04.081 [2024-11-18 06:52:56.980714] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:04.081 [2024-11-18 06:52:56.984711] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:19:04.081 [2024-11-18 06:52:56.984762] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:19:04.081 [2024-11-18 06:52:56.984775] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:04.081 [2024-11-18 06:52:56.984784] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:19:04.081 [2024-11-18 06:52:56.984793] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.960 ms 00:19:04.081 [2024-11-18 06:52:56.984802] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:04.081 [2024-11-18 06:52:57.000609] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:04.081 [2024-11-18 06:52:57.000664] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:19:04.082 [2024-11-18 06:52:57.000677] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.749 ms 00:19:04.082 [2024-11-18 06:52:57.000686] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:04.082 [2024-11-18 06:52:57.003626] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:04.082 [2024-11-18 06:52:57.003674] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:19:04.082 [2024-11-18 06:52:57.003685] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.794 ms 00:19:04.082 [2024-11-18 06:52:57.003693] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:04.082 [2024-11-18 06:52:57.006241] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:04.082 [2024-11-18 06:52:57.006406] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:19:04.082 [2024-11-18 06:52:57.006425] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.502 ms 00:19:04.082 [2024-11-18 06:52:57.006433] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:04.082 [2024-11-18 06:52:57.006849] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:04.082 [2024-11-18 06:52:57.006873] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:19:04.082 [2024-11-18 06:52:57.006884] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.321 ms 00:19:04.082 [2024-11-18 06:52:57.006893] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:04.082 [2024-11-18 06:52:57.031644] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:04.082 [2024-11-18 06:52:57.031720] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:19:04.082 [2024-11-18 06:52:57.031735] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 
24.725 ms 00:19:04.082 [2024-11-18 06:52:57.031744] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:04.082 [2024-11-18 06:52:57.040029] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:19:04.082 [2024-11-18 06:52:57.043209] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:04.082 [2024-11-18 06:52:57.043264] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:19:04.082 [2024-11-18 06:52:57.043276] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.410 ms 00:19:04.082 [2024-11-18 06:52:57.043289] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:04.082 [2024-11-18 06:52:57.043370] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:04.082 [2024-11-18 06:52:57.043382] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:19:04.082 [2024-11-18 06:52:57.043391] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:19:04.082 [2024-11-18 06:52:57.043406] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:04.082 [2024-11-18 06:52:57.043478] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:04.082 [2024-11-18 06:52:57.043494] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:19:04.082 [2024-11-18 06:52:57.043504] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:19:04.082 [2024-11-18 06:52:57.043513] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:04.082 [2024-11-18 06:52:57.043535] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:04.082 [2024-11-18 06:52:57.043543] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:19:04.082 [2024-11-18 06:52:57.043552] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:19:04.082 [2024-11-18 06:52:57.043559] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:04.082 [2024-11-18 06:52:57.043597] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:19:04.082 [2024-11-18 06:52:57.043611] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:04.082 [2024-11-18 06:52:57.043620] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:19:04.082 [2024-11-18 06:52:57.043628] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:19:04.082 [2024-11-18 06:52:57.043638] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:04.082 [2024-11-18 06:52:57.049444] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:04.082 [2024-11-18 06:52:57.049610] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:19:04.082 [2024-11-18 06:52:57.049629] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.787 ms 00:19:04.082 [2024-11-18 06:52:57.049638] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:04.082 [2024-11-18 06:52:57.049969] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:04.082 [2024-11-18 06:52:57.050033] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:19:04.082 [2024-11-18 06:52:57.050051] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:19:04.082 [2024-11-18 06:52:57.050065] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:04.082 
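A note on the trace records above: mngt/ftl_mngt.c logs every management step twice, once for its name (line 428) and once for its duration and status (lines 430-431), and the 'FTL startup' finish message that follows reports the wall-clock total for the whole pipeline. A minimal log-analysis sketch for pulling the per-step durations back out, assuming the console output was saved to a file called console.log (neither the file name nor the script is part of the SPDK test suite):

  # Sum every "duration: <n> ms" field emitted by trace_step.
  awk '/trace_step/ && /duration:/ {
           for (i = 1; i < NF; i++)
               if ($i == "duration:") { sum += $(i + 1); n++ }
       }
       END { printf "%d steps, %.3f ms traced\n", n, sum }' console.log

The traced sum normally lands a little below the reported process duration, since the manager also spends time between steps.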
[2024-11-18 06:52:57.051296] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 136.969 ms, result 0 00:19:05.467  [2024-11-18T06:52:59.497Z] Copying: 10/1024 [MB] (10 MBps) [2024-11-18T06:53:00.441Z] Copying: 21/1024 [MB] (10 MBps) [2024-11-18T06:53:01.382Z] Copying: 32/1024 [MB] (10 MBps) [2024-11-18T06:53:02.326Z] Copying: 42/1024 [MB] (10 MBps) [2024-11-18T06:53:03.332Z] Copying: 53/1024 [MB] (10 MBps) [2024-11-18T06:53:04.276Z] Copying: 64/1024 [MB] (10 MBps) [2024-11-18T06:53:05.664Z] Copying: 75/1024 [MB] (10 MBps) [2024-11-18T06:53:06.237Z] Copying: 86/1024 [MB] (10 MBps) [2024-11-18T06:53:07.629Z] Copying: 97/1024 [MB] (10 MBps) [2024-11-18T06:53:08.573Z] Copying: 107/1024 [MB] (10 MBps) [2024-11-18T06:53:09.515Z] Copying: 118/1024 [MB] (10 MBps) [2024-11-18T06:53:10.460Z] Copying: 129/1024 [MB] (11 MBps) [2024-11-18T06:53:11.495Z] Copying: 151/1024 [MB] (21 MBps) [2024-11-18T06:53:12.452Z] Copying: 165/1024 [MB] (14 MBps) [2024-11-18T06:53:13.395Z] Copying: 177/1024 [MB] (12 MBps) [2024-11-18T06:53:14.338Z] Copying: 193/1024 [MB] (15 MBps) [2024-11-18T06:53:15.282Z] Copying: 205/1024 [MB] (12 MBps) [2024-11-18T06:53:16.669Z] Copying: 222/1024 [MB] (16 MBps) [2024-11-18T06:53:17.242Z] Copying: 235/1024 [MB] (13 MBps) [2024-11-18T06:53:18.629Z] Copying: 249/1024 [MB] (13 MBps) [2024-11-18T06:53:19.573Z] Copying: 261/1024 [MB] (12 MBps) [2024-11-18T06:53:20.516Z] Copying: 274/1024 [MB] (12 MBps) [2024-11-18T06:53:21.460Z] Copying: 287/1024 [MB] (12 MBps) [2024-11-18T06:53:22.403Z] Copying: 298/1024 [MB] (11 MBps) [2024-11-18T06:53:23.346Z] Copying: 316/1024 [MB] (18 MBps) [2024-11-18T06:53:24.288Z] Copying: 329/1024 [MB] (12 MBps) [2024-11-18T06:53:25.674Z] Copying: 340/1024 [MB] (10 MBps) [2024-11-18T06:53:26.245Z] Copying: 350/1024 [MB] (10 MBps) [2024-11-18T06:53:27.631Z] Copying: 361/1024 [MB] (10 MBps) [2024-11-18T06:53:28.575Z] Copying: 371/1024 [MB] (10 MBps) [2024-11-18T06:53:29.518Z] Copying: 382/1024 [MB] (10 MBps) [2024-11-18T06:53:30.460Z] Copying: 392/1024 [MB] (10 MBps) [2024-11-18T06:53:31.403Z] Copying: 403/1024 [MB] (10 MBps) [2024-11-18T06:53:32.348Z] Copying: 413/1024 [MB] (10 MBps) [2024-11-18T06:53:33.292Z] Copying: 424/1024 [MB] (10 MBps) [2024-11-18T06:53:34.235Z] Copying: 437/1024 [MB] (12 MBps) [2024-11-18T06:53:35.622Z] Copying: 451/1024 [MB] (14 MBps) [2024-11-18T06:53:36.565Z] Copying: 469/1024 [MB] (18 MBps) [2024-11-18T06:53:37.508Z] Copying: 480/1024 [MB] (10 MBps) [2024-11-18T06:53:38.451Z] Copying: 490/1024 [MB] (10 MBps) [2024-11-18T06:53:39.465Z] Copying: 501/1024 [MB] (10 MBps) [2024-11-18T06:53:40.409Z] Copying: 512/1024 [MB] (10 MBps) [2024-11-18T06:53:41.355Z] Copying: 528/1024 [MB] (16 MBps) [2024-11-18T06:53:42.300Z] Copying: 547/1024 [MB] (18 MBps) [2024-11-18T06:53:43.304Z] Copying: 567/1024 [MB] (20 MBps) [2024-11-18T06:53:44.272Z] Copying: 585/1024 [MB] (17 MBps) [2024-11-18T06:53:45.660Z] Copying: 601/1024 [MB] (15 MBps) [2024-11-18T06:53:46.604Z] Copying: 614/1024 [MB] (13 MBps) [2024-11-18T06:53:47.547Z] Copying: 630/1024 [MB] (16 MBps) [2024-11-18T06:53:48.490Z] Copying: 652/1024 [MB] (21 MBps) [2024-11-18T06:53:49.431Z] Copying: 669/1024 [MB] (16 MBps) [2024-11-18T06:53:50.373Z] Copying: 684/1024 [MB] (15 MBps) [2024-11-18T06:53:51.317Z] Copying: 701/1024 [MB] (17 MBps) [2024-11-18T06:53:52.262Z] Copying: 714/1024 [MB] (13 MBps) [2024-11-18T06:53:53.651Z] Copying: 733/1024 [MB] (18 MBps) [2024-11-18T06:53:54.596Z] Copying: 749/1024 [MB] (16 MBps) 
[2024-11-18T06:53:55.541Z] Copying: 769/1024 [MB] (19 MBps) [2024-11-18T06:53:56.486Z] Copying: 787/1024 [MB] (17 MBps) [2024-11-18T06:53:57.431Z] Copying: 802/1024 [MB] (15 MBps) [2024-11-18T06:53:58.373Z] Copying: 819/1024 [MB] (16 MBps) [2024-11-18T06:53:59.317Z] Copying: 830/1024 [MB] (10 MBps) [2024-11-18T06:54:00.261Z] Copying: 843/1024 [MB] (12 MBps) [2024-11-18T06:54:01.654Z] Copying: 856/1024 [MB] (13 MBps) [2024-11-18T06:54:02.602Z] Copying: 876/1024 [MB] (19 MBps) [2024-11-18T06:54:03.547Z] Copying: 887/1024 [MB] (11 MBps) [2024-11-18T06:54:04.490Z] Copying: 906/1024 [MB] (18 MBps) [2024-11-18T06:54:05.433Z] Copying: 923/1024 [MB] (17 MBps) [2024-11-18T06:54:06.375Z] Copying: 934/1024 [MB] (11 MBps) [2024-11-18T06:54:07.317Z] Copying: 945/1024 [MB] (10 MBps) [2024-11-18T06:54:08.260Z] Copying: 971/1024 [MB] (25 MBps) [2024-11-18T06:54:09.644Z] Copying: 984/1024 [MB] (12 MBps) [2024-11-18T06:54:10.588Z] Copying: 999/1024 [MB] (14 MBps) [2024-11-18T06:54:10.589Z] Copying: 1016/1024 [MB] (17 MBps) [2024-11-18T06:54:10.589Z] Copying: 1024/1024 [MB] (average 13 MBps)[2024-11-18 06:54:10.568550] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:17.502 [2024-11-18 06:54:10.568633] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:20:17.502 [2024-11-18 06:54:10.568652] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:20:17.502 [2024-11-18 06:54:10.568670] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.502 [2024-11-18 06:54:10.568703] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:20:17.502 [2024-11-18 06:54:10.569265] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:17.502 [2024-11-18 06:54:10.569297] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:20:17.502 [2024-11-18 06:54:10.569311] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.541 ms 00:20:17.502 [2024-11-18 06:54:10.569323] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.502 [2024-11-18 06:54:10.569655] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:17.502 [2024-11-18 06:54:10.569680] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:20:17.502 [2024-11-18 06:54:10.569694] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.305 ms 00:20:17.502 [2024-11-18 06:54:10.569706] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.502 [2024-11-18 06:54:10.576037] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:17.502 [2024-11-18 06:54:10.576072] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:20:17.502 [2024-11-18 06:54:10.576085] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.304 ms 00:20:17.502 [2024-11-18 06:54:10.576097] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.502 [2024-11-18 06:54:10.582342] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:17.502 [2024-11-18 06:54:10.582367] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:20:17.502 [2024-11-18 06:54:10.582374] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.224 ms 00:20:17.502 [2024-11-18 06:54:10.582381] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.502 [2024-11-18 06:54:10.583816] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:20:17.502 [2024-11-18 06:54:10.583845] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:20:17.502 [2024-11-18 06:54:10.583852] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.353 ms 00:20:17.502 [2024-11-18 06:54:10.583857] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.764 [2024-11-18 06:54:10.587007] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:17.764 [2024-11-18 06:54:10.587036] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:20:17.764 [2024-11-18 06:54:10.587043] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.128 ms 00:20:17.764 [2024-11-18 06:54:10.587049] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.764 [2024-11-18 06:54:10.587131] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:17.764 [2024-11-18 06:54:10.587139] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:20:17.764 [2024-11-18 06:54:10.587144] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.057 ms 00:20:17.764 [2024-11-18 06:54:10.587155] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.764 [2024-11-18 06:54:10.588793] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:17.764 [2024-11-18 06:54:10.588819] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:20:17.764 [2024-11-18 06:54:10.588826] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.620 ms 00:20:17.764 [2024-11-18 06:54:10.588831] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.764 [2024-11-18 06:54:10.590009] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:17.764 [2024-11-18 06:54:10.590035] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:20:17.764 [2024-11-18 06:54:10.590042] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.156 ms 00:20:17.764 [2024-11-18 06:54:10.590048] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.764 [2024-11-18 06:54:10.591023] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:17.764 [2024-11-18 06:54:10.591050] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:20:17.764 [2024-11-18 06:54:10.591057] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.952 ms 00:20:17.764 [2024-11-18 06:54:10.591062] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.764 [2024-11-18 06:54:10.592052] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:17.764 [2024-11-18 06:54:10.592079] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:20:17.764 [2024-11-18 06:54:10.592086] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.948 ms 00:20:17.764 [2024-11-18 06:54:10.592092] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.764 [2024-11-18 06:54:10.592114] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:20:17.764 [2024-11-18 06:54:10.592125] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:20:17.764 [2024-11-18 06:54:10.592133] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:20:17.764 [2024-11-18 06:54:10.592140] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:20:17.764 [2024-11-18 06:54:10.592146] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:20:17.764 [2024-11-18 06:54:10.592151] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:20:17.764 [2024-11-18 06:54:10.592157] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:20:17.764 [2024-11-18 06:54:10.592164] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:20:17.764 [2024-11-18 06:54:10.592170] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:20:17.764 [2024-11-18 06:54:10.592175] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:20:17.764 [2024-11-18 06:54:10.592181] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:20:17.764 [2024-11-18 06:54:10.592187] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:20:17.764 [2024-11-18 06:54:10.592193] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:20:17.764 [2024-11-18 06:54:10.592198] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:20:17.764 [2024-11-18 06:54:10.592204] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:20:17.764 [2024-11-18 06:54:10.592210] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:20:17.764 [2024-11-18 06:54:10.592216] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:20:17.764 [2024-11-18 06:54:10.592222] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:20:17.764 [2024-11-18 06:54:10.592228] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:20:17.764 [2024-11-18 06:54:10.592234] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:20:17.764 [2024-11-18 06:54:10.592239] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:20:17.764 [2024-11-18 06:54:10.592245] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:20:17.764 [2024-11-18 06:54:10.592250] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:20:17.764 [2024-11-18 06:54:10.592256] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:20:17.764 [2024-11-18 06:54:10.592262] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:20:17.764 [2024-11-18 06:54:10.592268] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:20:17.764 [2024-11-18 06:54:10.592273] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:20:17.764 [2024-11-18 06:54:10.592279] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:20:17.764 [2024-11-18 06:54:10.592285] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:20:17.764 [2024-11-18 06:54:10.592290] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:20:17.764 [2024-11-18 06:54:10.592296] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:20:17.764 [2024-11-18 06:54:10.592302] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:20:17.764 [2024-11-18 06:54:10.592308] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:20:17.764 [2024-11-18 06:54:10.592314] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:20:17.764 [2024-11-18 06:54:10.592320] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:20:17.764 [2024-11-18 06:54:10.592326] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:20:17.764 [2024-11-18 06:54:10.592332] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:20:17.764 [2024-11-18 06:54:10.592337] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:20:17.764 [2024-11-18 06:54:10.592343] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:20:17.764 [2024-11-18 06:54:10.592349] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:20:17.764 [2024-11-18 06:54:10.592355] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:20:17.764 [2024-11-18 06:54:10.592361] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:20:17.764 [2024-11-18 06:54:10.592366] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:20:17.765 [2024-11-18 06:54:10.592372] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:20:17.765 [2024-11-18 06:54:10.592378] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:20:17.765 [2024-11-18 06:54:10.592384] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:20:17.765 [2024-11-18 06:54:10.592390] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:20:17.765 [2024-11-18 06:54:10.592396] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:20:17.765 [2024-11-18 06:54:10.592401] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:20:17.765 [2024-11-18 06:54:10.592408] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:20:17.765 [2024-11-18 06:54:10.592414] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:20:17.765 [2024-11-18 06:54:10.592420] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:20:17.765 [2024-11-18 06:54:10.592425] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:20:17.765 
[2024-11-18 06:54:10.592431] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:20:17.765 [2024-11-18 06:54:10.592437] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:20:17.765 [2024-11-18 06:54:10.592442] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:20:17.765 [2024-11-18 06:54:10.592448] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:20:17.765 [2024-11-18 06:54:10.592453] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:20:17.765 [2024-11-18 06:54:10.592459] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:20:17.765 [2024-11-18 06:54:10.592464] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:20:17.765 [2024-11-18 06:54:10.592470] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:20:17.765 [2024-11-18 06:54:10.592476] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:20:17.765 [2024-11-18 06:54:10.592481] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:20:17.765 [2024-11-18 06:54:10.592487] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:20:17.765 [2024-11-18 06:54:10.592493] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:20:17.765 [2024-11-18 06:54:10.592498] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:20:17.765 [2024-11-18 06:54:10.592504] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:20:17.765 [2024-11-18 06:54:10.592510] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:20:17.765 [2024-11-18 06:54:10.592516] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:20:17.765 [2024-11-18 06:54:10.592521] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:20:17.765 [2024-11-18 06:54:10.592527] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:20:17.765 [2024-11-18 06:54:10.592533] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:20:17.765 [2024-11-18 06:54:10.592539] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:20:17.765 [2024-11-18 06:54:10.592544] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:20:17.765 [2024-11-18 06:54:10.592550] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:20:17.765 [2024-11-18 06:54:10.592556] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:20:17.765 [2024-11-18 06:54:10.592561] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:20:17.765 [2024-11-18 06:54:10.592567] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 
state: free 00:20:17.765 [2024-11-18 06:54:10.592573] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:20:17.765 [2024-11-18 06:54:10.592579] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:20:17.765 [2024-11-18 06:54:10.592585] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:20:17.765 [2024-11-18 06:54:10.592590] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:20:17.765 [2024-11-18 06:54:10.592596] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:20:17.765 [2024-11-18 06:54:10.592601] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:20:17.765 [2024-11-18 06:54:10.592607] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:20:17.765 [2024-11-18 06:54:10.592612] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:20:17.765 [2024-11-18 06:54:10.592618] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:20:17.765 [2024-11-18 06:54:10.592623] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:20:17.765 [2024-11-18 06:54:10.592629] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:20:17.765 [2024-11-18 06:54:10.592635] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:20:17.765 [2024-11-18 06:54:10.592640] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:20:17.765 [2024-11-18 06:54:10.592646] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:20:17.765 [2024-11-18 06:54:10.592651] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:20:17.765 [2024-11-18 06:54:10.592657] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:20:17.765 [2024-11-18 06:54:10.592662] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:20:17.765 [2024-11-18 06:54:10.592669] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:20:17.765 [2024-11-18 06:54:10.592674] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:20:17.765 [2024-11-18 06:54:10.592680] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:20:17.765 [2024-11-18 06:54:10.592686] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:20:17.765 [2024-11-18 06:54:10.592692] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:20:17.765 [2024-11-18 06:54:10.592698] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:20:17.765 [2024-11-18 06:54:10.592710] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:20:17.765 [2024-11-18 06:54:10.592716] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 0cc73ce1-3fae-4d36-91d2-6119a87f6d65 
00:20:17.765 [2024-11-18 06:54:10.592723] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:20:17.765 [2024-11-18 06:54:10.592729] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:20:17.765 [2024-11-18 06:54:10.592735] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:20:17.765 [2024-11-18 06:54:10.592741] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:20:17.765 [2024-11-18 06:54:10.592746] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:20:17.765 [2024-11-18 06:54:10.592752] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:20:17.765 [2024-11-18 06:54:10.592758] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:20:17.765 [2024-11-18 06:54:10.592763] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:20:17.765 [2024-11-18 06:54:10.592768] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:20:17.765 [2024-11-18 06:54:10.592782] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:17.765 [2024-11-18 06:54:10.592792] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:20:17.765 [2024-11-18 06:54:10.592798] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.669 ms 00:20:17.765 [2024-11-18 06:54:10.592804] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.765 [2024-11-18 06:54:10.594000] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:17.765 [2024-11-18 06:54:10.594020] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:20:17.765 [2024-11-18 06:54:10.594027] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.185 ms 00:20:17.765 [2024-11-18 06:54:10.594033] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.765 [2024-11-18 06:54:10.594110] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:17.765 [2024-11-18 06:54:10.594117] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:20:17.765 [2024-11-18 06:54:10.594123] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:20:17.765 [2024-11-18 06:54:10.594128] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.765 [2024-11-18 06:54:10.598148] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:17.765 [2024-11-18 06:54:10.598173] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:17.765 [2024-11-18 06:54:10.598180] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:17.765 [2024-11-18 06:54:10.598186] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.765 [2024-11-18 06:54:10.598231] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:17.765 [2024-11-18 06:54:10.598238] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:17.765 [2024-11-18 06:54:10.598244] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:17.765 [2024-11-18 06:54:10.598249] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.765 [2024-11-18 06:54:10.598277] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:17.765 [2024-11-18 06:54:10.598284] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:17.765 [2024-11-18 06:54:10.598290] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:17.765 [2024-11-18 06:54:10.598296] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.765 [2024-11-18 06:54:10.598310] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:17.765 [2024-11-18 06:54:10.598317] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:17.765 [2024-11-18 06:54:10.598324] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:17.765 [2024-11-18 06:54:10.598329] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.765 [2024-11-18 06:54:10.605657] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:17.766 [2024-11-18 06:54:10.605689] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:17.766 [2024-11-18 06:54:10.605697] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:17.766 [2024-11-18 06:54:10.605702] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.766 [2024-11-18 06:54:10.611664] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:17.766 [2024-11-18 06:54:10.611702] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:17.766 [2024-11-18 06:54:10.611710] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:17.766 [2024-11-18 06:54:10.611720] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.766 [2024-11-18 06:54:10.611753] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:17.766 [2024-11-18 06:54:10.611760] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:17.766 [2024-11-18 06:54:10.611766] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:17.766 [2024-11-18 06:54:10.611772] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.766 [2024-11-18 06:54:10.611791] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:17.766 [2024-11-18 06:54:10.611797] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:17.766 [2024-11-18 06:54:10.611807] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:17.766 [2024-11-18 06:54:10.611812] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.766 [2024-11-18 06:54:10.611861] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:17.766 [2024-11-18 06:54:10.611868] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:17.766 [2024-11-18 06:54:10.611874] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:17.766 [2024-11-18 06:54:10.611879] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.766 [2024-11-18 06:54:10.611900] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:17.766 [2024-11-18 06:54:10.611910] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:20:17.766 [2024-11-18 06:54:10.611915] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:17.766 [2024-11-18 06:54:10.611922] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.766 [2024-11-18 06:54:10.611948] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:17.766 [2024-11-18 06:54:10.611956] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Open cache bdev 00:20:17.766 [2024-11-18 06:54:10.611961] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:17.766 [2024-11-18 06:54:10.611967] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.766 [2024-11-18 06:54:10.612008] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:17.766 [2024-11-18 06:54:10.612015] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:17.766 [2024-11-18 06:54:10.612024] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:17.766 [2024-11-18 06:54:10.612031] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.766 [2024-11-18 06:54:10.612119] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 43.562 ms, result 0 00:20:17.766 00:20:17.766 00:20:17.766 06:54:10 ftl.ftl_restore -- ftl/restore.sh@76 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:20:20.316 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:20:20.316 06:54:12 ftl.ftl_restore -- ftl/restore.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --seek=131072 00:20:20.316 [2024-11-18 06:54:12.971287] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 00:20:20.316 [2024-11-18 06:54:12.971377] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87454 ] 00:20:20.316 [2024-11-18 06:54:13.119432] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:20.316 [2024-11-18 06:54:13.136351] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:20.316 [2024-11-18 06:54:13.216701] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:20.316 [2024-11-18 06:54:13.216759] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:20.316 [2024-11-18 06:54:13.362921] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:20.316 [2024-11-18 06:54:13.362959] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:20:20.316 [2024-11-18 06:54:13.362968] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:20:20.316 [2024-11-18 06:54:13.362974] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:20.316 [2024-11-18 06:54:13.363018] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:20.316 [2024-11-18 06:54:13.363026] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:20.316 [2024-11-18 06:54:13.363032] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.022 ms 00:20:20.316 [2024-11-18 06:54:13.363040] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:20.316 [2024-11-18 06:54:13.363055] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:20:20.316 [2024-11-18 06:54:13.363225] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:20:20.316 [2024-11-18 06:54:13.363243] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:20.316 
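The two shell lines above are the heart of the restore test: restore.sh@76 checks the read-back against a previously recorded digest with md5sum -c, and restore.sh@79 then drives spdk_dd to write the test file into the ftl0 bdev at the logged --seek offset, which is what triggers the second 'FTL startup' sequence traced below. Restated as a standalone sketch using only the flags and paths visible in the log (the repo layout is assumed to match the CI host):

  SPDK_DIR=/home/vagrant/spdk_repo/spdk
  # Verify the read-back against the stored digest ...
  md5sum -c "$SPDK_DIR/test/ftl/testfile.md5"
  # ... then write the test file into the ftl0 bdev at the logged offset.
  "$SPDK_DIR/build/bin/spdk_dd" \
      --if="$SPDK_DIR/test/ftl/testfile" \
      --ob=ftl0 \
      --json="$SPDK_DIR/test/ftl/config/ftl.json" \
      --seek=131072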
[2024-11-18 06:54:13.363251] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:20.316 [2024-11-18 06:54:13.363257] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.195 ms 00:20:20.316 [2024-11-18 06:54:13.363264] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:20.316 [2024-11-18 06:54:13.364154] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:20:20.316 [2024-11-18 06:54:13.366103] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:20.316 [2024-11-18 06:54:13.366130] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:20:20.316 [2024-11-18 06:54:13.366138] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.949 ms 00:20:20.316 [2024-11-18 06:54:13.366147] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:20.316 [2024-11-18 06:54:13.366187] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:20.316 [2024-11-18 06:54:13.366194] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:20:20.316 [2024-11-18 06:54:13.366202] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:20:20.316 [2024-11-18 06:54:13.366208] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:20.316 [2024-11-18 06:54:13.370382] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:20.316 [2024-11-18 06:54:13.370407] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:20.316 [2024-11-18 06:54:13.370419] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.146 ms 00:20:20.316 [2024-11-18 06:54:13.370424] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:20.316 [2024-11-18 06:54:13.370490] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:20.316 [2024-11-18 06:54:13.370498] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:20.316 [2024-11-18 06:54:13.370504] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.048 ms 00:20:20.316 [2024-11-18 06:54:13.370510] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:20.316 [2024-11-18 06:54:13.370549] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:20.316 [2024-11-18 06:54:13.370556] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:20:20.316 [2024-11-18 06:54:13.370562] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:20:20.316 [2024-11-18 06:54:13.370567] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:20.316 [2024-11-18 06:54:13.370584] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:20:20.316 [2024-11-18 06:54:13.371726] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:20.316 [2024-11-18 06:54:13.371751] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:20.316 [2024-11-18 06:54:13.371774] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.145 ms 00:20:20.316 [2024-11-18 06:54:13.371781] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:20.316 [2024-11-18 06:54:13.371802] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:20.316 [2024-11-18 06:54:13.371813] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:20:20.316 
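One figure in the 'Dump statistics' block of the preceding shutdown deserves a gloss: WAF (write amplification factor) is conventionally the ratio of writes the device actually performed to writes the user submitted. Worked from the dump above, assuming ftl_debug.c follows that convention:

  WAF = total writes / user writes = 960 / 0  ->  reported as "inf"

That is, all 960 recorded writes in this phase were FTL's own metadata, with no user I/O to amortize them against, so the ratio is printed as infinity rather than a number.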
[2024-11-18 06:54:13.371821] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:20:20.316 [2024-11-18 06:54:13.371827] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:20.316 [2024-11-18 06:54:13.371843] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:20:20.316 [2024-11-18 06:54:13.371857] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:20:20.316 [2024-11-18 06:54:13.371888] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:20:20.316 [2024-11-18 06:54:13.371903] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:20:20.316 [2024-11-18 06:54:13.371996] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:20:20.316 [2024-11-18 06:54:13.372005] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:20:20.316 [2024-11-18 06:54:13.372013] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:20:20.316 [2024-11-18 06:54:13.372025] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:20:20.316 [2024-11-18 06:54:13.372032] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:20:20.316 [2024-11-18 06:54:13.372039] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:20:20.316 [2024-11-18 06:54:13.372047] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:20:20.316 [2024-11-18 06:54:13.372053] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:20:20.316 [2024-11-18 06:54:13.372058] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:20:20.316 [2024-11-18 06:54:13.372064] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:20.316 [2024-11-18 06:54:13.372069] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:20:20.316 [2024-11-18 06:54:13.372075] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.222 ms 00:20:20.316 [2024-11-18 06:54:13.372080] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:20.316 [2024-11-18 06:54:13.372145] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:20.316 [2024-11-18 06:54:13.372152] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:20:20.316 [2024-11-18 06:54:13.372159] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:20:20.316 [2024-11-18 06:54:13.372164] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:20.316 [2024-11-18 06:54:13.372235] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:20:20.316 [2024-11-18 06:54:13.372242] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:20:20.316 [2024-11-18 06:54:13.372248] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:20.316 [2024-11-18 06:54:13.372253] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:20.316 [2024-11-18 06:54:13.372259] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:20:20.316 [2024-11-18 06:54:13.372269] ftl_layout.c: 131:dump_region: 
*NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:20:20.316 [2024-11-18 06:54:13.372274] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:20:20.316 [2024-11-18 06:54:13.372279] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:20:20.316 [2024-11-18 06:54:13.372284] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:20:20.316 [2024-11-18 06:54:13.372289] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:20.316 [2024-11-18 06:54:13.372295] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:20:20.316 [2024-11-18 06:54:13.372299] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:20:20.316 [2024-11-18 06:54:13.372307] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:20.316 [2024-11-18 06:54:13.372313] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:20:20.316 [2024-11-18 06:54:13.372318] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:20:20.316 [2024-11-18 06:54:13.372323] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:20.316 [2024-11-18 06:54:13.372328] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:20:20.316 [2024-11-18 06:54:13.372333] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:20:20.316 [2024-11-18 06:54:13.372337] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:20.316 [2024-11-18 06:54:13.372342] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:20:20.316 [2024-11-18 06:54:13.372348] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:20:20.316 [2024-11-18 06:54:13.372353] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:20.317 [2024-11-18 06:54:13.372358] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:20:20.317 [2024-11-18 06:54:13.372363] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:20:20.317 [2024-11-18 06:54:13.372368] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:20.317 [2024-11-18 06:54:13.372372] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:20:20.317 [2024-11-18 06:54:13.372378] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:20:20.317 [2024-11-18 06:54:13.372384] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:20.317 [2024-11-18 06:54:13.372392] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:20:20.317 [2024-11-18 06:54:13.372398] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:20:20.317 [2024-11-18 06:54:13.372404] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:20.317 [2024-11-18 06:54:13.372410] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:20:20.317 [2024-11-18 06:54:13.372416] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:20:20.317 [2024-11-18 06:54:13.372421] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:20.317 [2024-11-18 06:54:13.372426] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:20:20.317 [2024-11-18 06:54:13.372432] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:20:20.317 [2024-11-18 06:54:13.372437] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:20.317 [2024-11-18 
06:54:13.372443] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:20:20.317 [2024-11-18 06:54:13.372449] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:20:20.317 [2024-11-18 06:54:13.372454] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:20.317 [2024-11-18 06:54:13.372460] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:20:20.317 [2024-11-18 06:54:13.372465] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:20:20.317 [2024-11-18 06:54:13.372471] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:20.317 [2024-11-18 06:54:13.372477] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:20:20.317 [2024-11-18 06:54:13.372485] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:20:20.317 [2024-11-18 06:54:13.372494] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:20.317 [2024-11-18 06:54:13.372500] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:20.317 [2024-11-18 06:54:13.372506] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:20:20.317 [2024-11-18 06:54:13.372512] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:20:20.317 [2024-11-18 06:54:13.372517] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:20:20.317 [2024-11-18 06:54:13.372523] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:20:20.317 [2024-11-18 06:54:13.372528] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:20:20.317 [2024-11-18 06:54:13.372534] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:20:20.317 [2024-11-18 06:54:13.372541] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:20:20.317 [2024-11-18 06:54:13.372548] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:20.317 [2024-11-18 06:54:13.372555] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:20:20.317 [2024-11-18 06:54:13.372561] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:20:20.317 [2024-11-18 06:54:13.372567] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:20:20.317 [2024-11-18 06:54:13.372573] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:20:20.317 [2024-11-18 06:54:13.372579] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:20:20.317 [2024-11-18 06:54:13.372587] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:20:20.317 [2024-11-18 06:54:13.372593] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:20:20.317 [2024-11-18 06:54:13.372599] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:20:20.317 [2024-11-18 06:54:13.372605] 
upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:20:20.317 [2024-11-18 06:54:13.372611] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:20:20.317 [2024-11-18 06:54:13.372617] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:20:20.317 [2024-11-18 06:54:13.372624] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:20:20.317 [2024-11-18 06:54:13.372631] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:20:20.317 [2024-11-18 06:54:13.372637] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:20:20.317 [2024-11-18 06:54:13.372643] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:20:20.317 [2024-11-18 06:54:13.372650] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:20.317 [2024-11-18 06:54:13.372659] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:20:20.317 [2024-11-18 06:54:13.372665] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:20:20.317 [2024-11-18 06:54:13.372671] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:20:20.317 [2024-11-18 06:54:13.372678] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:20:20.317 [2024-11-18 06:54:13.372684] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:20.317 [2024-11-18 06:54:13.372692] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:20:20.317 [2024-11-18 06:54:13.372698] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.502 ms 00:20:20.317 [2024-11-18 06:54:13.372704] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:20.317 [2024-11-18 06:54:13.380302] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:20.317 [2024-11-18 06:54:13.380328] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:20.317 [2024-11-18 06:54:13.380339] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.566 ms 00:20:20.317 [2024-11-18 06:54:13.380344] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:20.317 [2024-11-18 06:54:13.380402] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:20.317 [2024-11-18 06:54:13.380412] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:20:20.317 [2024-11-18 06:54:13.380418] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.044 ms 00:20:20.317 [2024-11-18 06:54:13.380423] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:20.317 [2024-11-18 06:54:13.399029] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:20.317 [2024-11-18 06:54:13.399091] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:20.317 [2024-11-18 06:54:13.399113] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.574 ms 00:20:20.317 [2024-11-18 06:54:13.399128] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:20.578 [2024-11-18 06:54:13.399213] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:20.578 [2024-11-18 06:54:13.399233] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:20.579 [2024-11-18 06:54:13.399248] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:20:20.579 [2024-11-18 06:54:13.399262] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:20.579 [2024-11-18 06:54:13.399707] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:20.579 [2024-11-18 06:54:13.399755] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:20.579 [2024-11-18 06:54:13.399774] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.366 ms 00:20:20.579 [2024-11-18 06:54:13.399790] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:20.579 [2024-11-18 06:54:13.400036] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:20.579 [2024-11-18 06:54:13.400071] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:20.579 [2024-11-18 06:54:13.400086] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.208 ms 00:20:20.579 [2024-11-18 06:54:13.400100] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:20.579 [2024-11-18 06:54:13.406818] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:20.579 [2024-11-18 06:54:13.406875] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:20.579 [2024-11-18 06:54:13.406900] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.686 ms 00:20:20.579 [2024-11-18 06:54:13.406920] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:20.579 [2024-11-18 06:54:13.409102] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:20:20.579 [2024-11-18 06:54:13.409135] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:20:20.579 [2024-11-18 06:54:13.409147] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:20.579 [2024-11-18 06:54:13.409153] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:20:20.579 [2024-11-18 06:54:13.409159] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.059 ms 00:20:20.579 [2024-11-18 06:54:13.409165] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:20.579 [2024-11-18 06:54:13.420203] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:20.579 [2024-11-18 06:54:13.420238] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:20:20.579 [2024-11-18 06:54:13.420246] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.009 ms 00:20:20.579 [2024-11-18 06:54:13.420253] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:20.579 [2024-11-18 06:54:13.421695] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:20.579 [2024-11-18 06:54:13.421720] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info 
metadata 00:20:20.579 [2024-11-18 06:54:13.421726] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.411 ms 00:20:20.579 [2024-11-18 06:54:13.421732] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:20.579 [2024-11-18 06:54:13.422929] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:20.579 [2024-11-18 06:54:13.422954] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:20:20.579 [2024-11-18 06:54:13.422960] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.174 ms 00:20:20.579 [2024-11-18 06:54:13.422965] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:20.579 [2024-11-18 06:54:13.423210] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:20.579 [2024-11-18 06:54:13.423232] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:20:20.579 [2024-11-18 06:54:13.423239] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.194 ms 00:20:20.579 [2024-11-18 06:54:13.423244] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:20.579 [2024-11-18 06:54:13.436724] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:20.579 [2024-11-18 06:54:13.436759] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:20:20.579 [2024-11-18 06:54:13.436767] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.469 ms 00:20:20.579 [2024-11-18 06:54:13.436773] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:20.579 [2024-11-18 06:54:13.442451] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:20:20.579 [2024-11-18 06:54:13.444261] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:20.579 [2024-11-18 06:54:13.444291] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:20:20.579 [2024-11-18 06:54:13.444298] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.460 ms 00:20:20.579 [2024-11-18 06:54:13.444303] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:20.579 [2024-11-18 06:54:13.444341] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:20.579 [2024-11-18 06:54:13.444349] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:20:20.579 [2024-11-18 06:54:13.444355] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:20:20.579 [2024-11-18 06:54:13.444360] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:20.579 [2024-11-18 06:54:13.444410] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:20.579 [2024-11-18 06:54:13.444417] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:20:20.579 [2024-11-18 06:54:13.444423] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.022 ms 00:20:20.579 [2024-11-18 06:54:13.444432] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:20.579 [2024-11-18 06:54:13.444446] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:20.579 [2024-11-18 06:54:13.444452] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:20:20.579 [2024-11-18 06:54:13.444458] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:20:20.579 [2024-11-18 06:54:13.444464] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:20.579 [2024-11-18 
06:54:13.444488] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:20:20.579 [2024-11-18 06:54:13.444494] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:20.579 [2024-11-18 06:54:13.444501] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:20:20.579 [2024-11-18 06:54:13.444507] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:20:20.579 [2024-11-18 06:54:13.444512] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:20.579 [2024-11-18 06:54:13.447157] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:20.579 [2024-11-18 06:54:13.447185] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:20:20.579 [2024-11-18 06:54:13.447192] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.629 ms 00:20:20.579 [2024-11-18 06:54:13.447198] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:20.579 [2024-11-18 06:54:13.447253] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:20.579 [2024-11-18 06:54:13.447261] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:20:20.579 [2024-11-18 06:54:13.447267] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:20:20.579 [2024-11-18 06:54:13.447272] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:20.579 [2024-11-18 06:54:13.448054] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 84.784 ms, result 0 00:20:21.524  [2024-11-18T06:54:15.644Z] Copying: 24/1024 [MB] (24 MBps) [2024-11-18T06:54:16.586Z] Copying: 37/1024 [MB] (13 MBps) [2024-11-18T06:54:17.528Z] Copying: 57/1024 [MB] (19 MBps) [2024-11-18T06:54:18.472Z] Copying: 73/1024 [MB] (15 MBps) [2024-11-18T06:54:19.858Z] Copying: 90/1024 [MB] (17 MBps) [2024-11-18T06:54:20.804Z] Copying: 105/1024 [MB] (15 MBps) [2024-11-18T06:54:21.748Z] Copying: 118/1024 [MB] (12 MBps) [2024-11-18T06:54:22.692Z] Copying: 131/1024 [MB] (13 MBps) [2024-11-18T06:54:23.636Z] Copying: 147/1024 [MB] (15 MBps) [2024-11-18T06:54:24.580Z] Copying: 162/1024 [MB] (14 MBps) [2024-11-18T06:54:25.524Z] Copying: 174/1024 [MB] (11 MBps) [2024-11-18T06:54:26.466Z] Copying: 191/1024 [MB] (17 MBps) [2024-11-18T06:54:27.849Z] Copying: 202/1024 [MB] (10 MBps) [2024-11-18T06:54:28.793Z] Copying: 212/1024 [MB] (10 MBps) [2024-11-18T06:54:29.735Z] Copying: 222/1024 [MB] (10 MBps) [2024-11-18T06:54:30.679Z] Copying: 234/1024 [MB] (11 MBps) [2024-11-18T06:54:31.625Z] Copying: 251/1024 [MB] (17 MBps) [2024-11-18T06:54:32.569Z] Copying: 262/1024 [MB] (10 MBps) [2024-11-18T06:54:33.515Z] Copying: 297/1024 [MB] (35 MBps) [2024-11-18T06:54:34.459Z] Copying: 343/1024 [MB] (45 MBps) [2024-11-18T06:54:35.846Z] Copying: 354/1024 [MB] (11 MBps) [2024-11-18T06:54:36.791Z] Copying: 365/1024 [MB] (10 MBps) [2024-11-18T06:54:37.735Z] Copying: 381420/1048576 [kB] (7532 kBps) [2024-11-18T06:54:38.677Z] Copying: 382/1024 [MB] (10 MBps) [2024-11-18T06:54:39.622Z] Copying: 397/1024 [MB] (15 MBps) [2024-11-18T06:54:40.568Z] Copying: 408/1024 [MB] (10 MBps) [2024-11-18T06:54:41.512Z] Copying: 428/1024 [MB] (20 MBps) [2024-11-18T06:54:42.900Z] Copying: 442/1024 [MB] (13 MBps) [2024-11-18T06:54:43.529Z] Copying: 453/1024 [MB] (11 MBps) [2024-11-18T06:54:44.470Z] Copying: 471/1024 [MB] (17 MBps) [2024-11-18T06:54:45.855Z] Copying: 483/1024 [MB] (11 MBps) [2024-11-18T06:54:46.798Z] Copying: 
493/1024 [MB] (10 MBps) [2024-11-18T06:54:47.741Z] Copying: 504/1024 [MB] (11 MBps) [2024-11-18T06:54:48.685Z] Copying: 539/1024 [MB] (34 MBps) [2024-11-18T06:54:49.630Z] Copying: 563/1024 [MB] (24 MBps) [2024-11-18T06:54:50.574Z] Copying: 579/1024 [MB] (15 MBps) [2024-11-18T06:54:51.518Z] Copying: 612/1024 [MB] (33 MBps) [2024-11-18T06:54:52.464Z] Copying: 633/1024 [MB] (21 MBps) [2024-11-18T06:54:53.852Z] Copying: 653/1024 [MB] (19 MBps) [2024-11-18T06:54:54.796Z] Copying: 671/1024 [MB] (18 MBps) [2024-11-18T06:54:55.742Z] Copying: 687/1024 [MB] (15 MBps) [2024-11-18T06:54:56.686Z] Copying: 703/1024 [MB] (15 MBps) [2024-11-18T06:54:57.629Z] Copying: 718/1024 [MB] (15 MBps) [2024-11-18T06:54:58.573Z] Copying: 737/1024 [MB] (18 MBps) [2024-11-18T06:54:59.518Z] Copying: 755/1024 [MB] (17 MBps) [2024-11-18T06:55:00.463Z] Copying: 775/1024 [MB] (20 MBps) [2024-11-18T06:55:01.851Z] Copying: 806/1024 [MB] (30 MBps) [2024-11-18T06:55:02.796Z] Copying: 823/1024 [MB] (16 MBps) [2024-11-18T06:55:03.741Z] Copying: 839/1024 [MB] (16 MBps) [2024-11-18T06:55:04.685Z] Copying: 860/1024 [MB] (20 MBps) [2024-11-18T06:55:05.631Z] Copying: 877/1024 [MB] (16 MBps) [2024-11-18T06:55:06.574Z] Copying: 891/1024 [MB] (14 MBps) [2024-11-18T06:55:07.518Z] Copying: 924/1024 [MB] (33 MBps) [2024-11-18T06:55:08.463Z] Copying: 948/1024 [MB] (23 MBps) [2024-11-18T06:55:09.851Z] Copying: 958/1024 [MB] (10 MBps) [2024-11-18T06:55:10.796Z] Copying: 977/1024 [MB] (18 MBps) [2024-11-18T06:55:11.739Z] Copying: 997/1024 [MB] (19 MBps) [2024-11-18T06:55:12.003Z] Copying: 1023/1024 [MB] (26 MBps) [2024-11-18T06:55:12.003Z] Copying: 1024/1024 [MB] (average 17 MBps)[2024-11-18 06:55:11.791198] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:18.916 [2024-11-18 06:55:11.791321] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:21:18.916 [2024-11-18 06:55:11.791375] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:21:18.916 [2024-11-18 06:55:11.791394] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:18.916 [2024-11-18 06:55:11.793708] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:21:18.916 [2024-11-18 06:55:11.794947] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:18.916 [2024-11-18 06:55:11.795055] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:21:18.916 [2024-11-18 06:55:11.795068] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.136 ms 00:21:18.916 [2024-11-18 06:55:11.795081] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:18.916 [2024-11-18 06:55:11.804826] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:18.916 [2024-11-18 06:55:11.804857] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:21:18.916 [2024-11-18 06:55:11.804865] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.176 ms 00:21:18.916 [2024-11-18 06:55:11.804871] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:18.916 [2024-11-18 06:55:11.821255] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:18.916 [2024-11-18 06:55:11.821285] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:21:18.916 [2024-11-18 06:55:11.821300] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.372 ms 00:21:18.916 [2024-11-18 06:55:11.821306] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:21:18.916 [2024-11-18 06:55:11.826104] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:18.916 [2024-11-18 06:55:11.826131] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:21:18.916 [2024-11-18 06:55:11.826140] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.773 ms 00:21:18.916 [2024-11-18 06:55:11.826147] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:18.916 [2024-11-18 06:55:11.827383] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:18.916 [2024-11-18 06:55:11.827414] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:21:18.916 [2024-11-18 06:55:11.827421] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.206 ms 00:21:18.916 [2024-11-18 06:55:11.827427] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:18.916 [2024-11-18 06:55:11.830631] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:18.916 [2024-11-18 06:55:11.830660] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:21:18.916 [2024-11-18 06:55:11.830668] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.179 ms 00:21:18.916 [2024-11-18 06:55:11.830675] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:18.916 [2024-11-18 06:55:11.893184] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:18.916 [2024-11-18 06:55:11.893225] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:21:18.916 [2024-11-18 06:55:11.893234] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 62.478 ms 00:21:18.916 [2024-11-18 06:55:11.893239] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:18.916 [2024-11-18 06:55:11.894753] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:18.916 [2024-11-18 06:55:11.894781] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:21:18.916 [2024-11-18 06:55:11.894789] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.495 ms 00:21:18.916 [2024-11-18 06:55:11.894794] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:18.916 [2024-11-18 06:55:11.896185] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:18.916 [2024-11-18 06:55:11.896212] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:21:18.916 [2024-11-18 06:55:11.896219] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.367 ms 00:21:18.916 [2024-11-18 06:55:11.896225] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:18.916 [2024-11-18 06:55:11.897159] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:18.916 [2024-11-18 06:55:11.897185] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:21:18.916 [2024-11-18 06:55:11.897192] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.910 ms 00:21:18.916 [2024-11-18 06:55:11.897198] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:18.916 [2024-11-18 06:55:11.898174] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:18.916 [2024-11-18 06:55:11.898201] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:21:18.916 [2024-11-18 06:55:11.898208] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.933 ms 
00:21:18.916 [2024-11-18 06:55:11.898213] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:18.916 [2024-11-18 06:55:11.898236] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:21:18.916 [2024-11-18 06:55:11.898246] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 116480 / 261120 wr_cnt: 1 state: open 00:21:18.916 [2024-11-18 06:55:11.898255] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:21:18.916 [2024-11-18 06:55:11.898262] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:21:18.916 [2024-11-18 06:55:11.898268] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:21:18.916 [2024-11-18 06:55:11.898274] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:21:18.916 [2024-11-18 06:55:11.898280] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:21:18.916 [2024-11-18 06:55:11.898286] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:21:18.916 [2024-11-18 06:55:11.898292] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:21:18.916 [2024-11-18 06:55:11.898298] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:21:18.916 [2024-11-18 06:55:11.898304] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:21:18.916 [2024-11-18 06:55:11.898315] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:21:18.916 [2024-11-18 06:55:11.898321] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:21:18.916 [2024-11-18 06:55:11.898328] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:21:18.916 [2024-11-18 06:55:11.898334] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:21:18.916 [2024-11-18 06:55:11.898340] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:21:18.916 [2024-11-18 06:55:11.898346] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:21:18.916 [2024-11-18 06:55:11.898352] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:21:18.916 [2024-11-18 06:55:11.898359] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:21:18.916 [2024-11-18 06:55:11.898364] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:21:18.916 [2024-11-18 06:55:11.898371] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:21:18.916 [2024-11-18 06:55:11.898377] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:21:18.917 [2024-11-18 06:55:11.898383] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:21:18.917 [2024-11-18 06:55:11.898389] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:21:18.917 [2024-11-18 06:55:11.898395] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:21:18.917 [2024-11-18 06:55:11.898401] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:21:18.917 [2024-11-18 06:55:11.898406] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:21:18.917 [2024-11-18 06:55:11.898412] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:21:18.917 [2024-11-18 06:55:11.898418] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:21:18.917 [2024-11-18 06:55:11.898424] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:21:18.917 [2024-11-18 06:55:11.898430] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:21:18.917 [2024-11-18 06:55:11.898435] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:21:18.917 [2024-11-18 06:55:11.898443] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:21:18.917 [2024-11-18 06:55:11.898449] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:21:18.917 [2024-11-18 06:55:11.898455] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:21:18.917 [2024-11-18 06:55:11.898462] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:21:18.917 [2024-11-18 06:55:11.898468] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:21:18.917 [2024-11-18 06:55:11.898474] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:21:18.917 [2024-11-18 06:55:11.898480] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:21:18.917 [2024-11-18 06:55:11.898485] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:21:18.917 [2024-11-18 06:55:11.898491] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:21:18.917 [2024-11-18 06:55:11.898498] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:21:18.917 [2024-11-18 06:55:11.898504] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:21:18.917 [2024-11-18 06:55:11.898509] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:21:18.917 [2024-11-18 06:55:11.898515] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:21:18.917 [2024-11-18 06:55:11.898521] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:21:18.917 [2024-11-18 06:55:11.898528] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:21:18.917 [2024-11-18 06:55:11.898534] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:21:18.917 [2024-11-18 06:55:11.898540] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:21:18.917 
[2024-11-18 06:55:11.898546] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:21:18.917 [2024-11-18 06:55:11.898552] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:21:18.917 [2024-11-18 06:55:11.898557] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:21:18.917 [2024-11-18 06:55:11.898563] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:21:18.917 [2024-11-18 06:55:11.898569] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:21:18.917 [2024-11-18 06:55:11.898575] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:21:18.917 [2024-11-18 06:55:11.898581] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:21:18.917 [2024-11-18 06:55:11.898586] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:21:18.917 [2024-11-18 06:55:11.898592] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:21:18.917 [2024-11-18 06:55:11.898598] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:21:18.917 [2024-11-18 06:55:11.898604] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:21:18.917 [2024-11-18 06:55:11.898610] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:21:18.917 [2024-11-18 06:55:11.898616] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:21:18.917 [2024-11-18 06:55:11.898622] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:21:18.917 [2024-11-18 06:55:11.898627] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:21:18.917 [2024-11-18 06:55:11.898634] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:21:18.917 [2024-11-18 06:55:11.898640] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:21:18.917 [2024-11-18 06:55:11.898645] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:21:18.917 [2024-11-18 06:55:11.898651] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:21:18.917 [2024-11-18 06:55:11.898657] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:21:18.917 [2024-11-18 06:55:11.898663] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:21:18.917 [2024-11-18 06:55:11.898669] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:21:18.917 [2024-11-18 06:55:11.898675] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:21:18.917 [2024-11-18 06:55:11.898681] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:21:18.917 [2024-11-18 06:55:11.898687] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 
state: free 00:21:18.917 [2024-11-18 06:55:11.898693] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:21:18.917 [2024-11-18 06:55:11.898699] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:21:18.917 [2024-11-18 06:55:11.898705] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:21:18.917 [2024-11-18 06:55:11.898711] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:21:18.917 [2024-11-18 06:55:11.898717] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:21:18.917 [2024-11-18 06:55:11.898722] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:21:18.917 [2024-11-18 06:55:11.898742] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:21:18.917 [2024-11-18 06:55:11.898748] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:21:18.917 [2024-11-18 06:55:11.898754] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:21:18.917 [2024-11-18 06:55:11.898760] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:21:18.917 [2024-11-18 06:55:11.898766] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:21:18.917 [2024-11-18 06:55:11.898772] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:21:18.917 [2024-11-18 06:55:11.898778] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:21:18.917 [2024-11-18 06:55:11.898784] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:21:18.917 [2024-11-18 06:55:11.898790] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:21:18.917 [2024-11-18 06:55:11.898795] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:21:18.917 [2024-11-18 06:55:11.898801] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:21:18.917 [2024-11-18 06:55:11.898807] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:21:18.917 [2024-11-18 06:55:11.898813] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:21:18.917 [2024-11-18 06:55:11.898818] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:21:18.917 [2024-11-18 06:55:11.898824] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:21:18.917 [2024-11-18 06:55:11.898830] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:21:18.917 [2024-11-18 06:55:11.898837] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:21:18.917 [2024-11-18 06:55:11.898843] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:21:18.917 [2024-11-18 06:55:11.898849] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 
0 / 261120 wr_cnt: 0 state: free 00:21:18.917 [2024-11-18 06:55:11.898855] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:21:18.917 [2024-11-18 06:55:11.898861] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:21:18.917 [2024-11-18 06:55:11.898874] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:21:18.917 [2024-11-18 06:55:11.898880] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 0cc73ce1-3fae-4d36-91d2-6119a87f6d65 00:21:18.917 [2024-11-18 06:55:11.898887] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 116480 00:21:18.917 [2024-11-18 06:55:11.898893] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 117440 00:21:18.917 [2024-11-18 06:55:11.898903] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 116480 00:21:18.917 [2024-11-18 06:55:11.898910] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0082 00:21:18.917 [2024-11-18 06:55:11.898916] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:21:18.917 [2024-11-18 06:55:11.898925] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:21:18.918 [2024-11-18 06:55:11.898931] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:21:18.918 [2024-11-18 06:55:11.898936] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:21:18.918 [2024-11-18 06:55:11.898941] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:21:18.918 [2024-11-18 06:55:11.898952] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:18.918 [2024-11-18 06:55:11.898958] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:21:18.918 [2024-11-18 06:55:11.898964] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.717 ms 00:21:18.918 [2024-11-18 06:55:11.898970] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:18.918 [2024-11-18 06:55:11.900241] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:18.918 [2024-11-18 06:55:11.900262] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:21:18.918 [2024-11-18 06:55:11.900270] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.250 ms 00:21:18.918 [2024-11-18 06:55:11.900276] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:18.918 [2024-11-18 06:55:11.900349] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:18.918 [2024-11-18 06:55:11.900356] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:21:18.918 [2024-11-18 06:55:11.900363] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.060 ms 00:21:18.918 [2024-11-18 06:55:11.900369] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:18.918 [2024-11-18 06:55:11.904508] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:18.918 [2024-11-18 06:55:11.904531] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:18.918 [2024-11-18 06:55:11.904538] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:18.918 [2024-11-18 06:55:11.904544] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:18.918 [2024-11-18 06:55:11.904586] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:18.918 [2024-11-18 06:55:11.904596] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:18.918 [2024-11-18 06:55:11.904602] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:18.918 [2024-11-18 06:55:11.904608] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:18.918 [2024-11-18 06:55:11.904637] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:18.918 [2024-11-18 06:55:11.904644] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:18.918 [2024-11-18 06:55:11.904649] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:18.918 [2024-11-18 06:55:11.904655] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:18.918 [2024-11-18 06:55:11.904666] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:18.918 [2024-11-18 06:55:11.904672] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:18.918 [2024-11-18 06:55:11.904678] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:18.918 [2024-11-18 06:55:11.904687] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:18.918 [2024-11-18 06:55:11.912249] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:18.918 [2024-11-18 06:55:11.912283] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:18.918 [2024-11-18 06:55:11.912291] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:18.918 [2024-11-18 06:55:11.912298] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:18.918 [2024-11-18 06:55:11.918417] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:18.918 [2024-11-18 06:55:11.918451] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:18.918 [2024-11-18 06:55:11.918458] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:18.918 [2024-11-18 06:55:11.918471] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:18.918 [2024-11-18 06:55:11.918503] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:18.918 [2024-11-18 06:55:11.918513] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:18.918 [2024-11-18 06:55:11.918519] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:18.918 [2024-11-18 06:55:11.918524] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:18.918 [2024-11-18 06:55:11.918542] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:18.918 [2024-11-18 06:55:11.918548] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:18.918 [2024-11-18 06:55:11.918554] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:18.918 [2024-11-18 06:55:11.918559] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:18.918 [2024-11-18 06:55:11.918607] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:18.918 [2024-11-18 06:55:11.918618] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:18.918 [2024-11-18 06:55:11.918627] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:18.918 [2024-11-18 06:55:11.918633] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:18.918 [2024-11-18 06:55:11.918653] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:21:18.918 [2024-11-18 06:55:11.918660] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock
00:21:18.918 [2024-11-18 06:55:11.918666] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:21:18.918 [2024-11-18 06:55:11.918672] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:21:18.918 [2024-11-18 06:55:11.918704] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:21:18.918 [2024-11-18 06:55:11.918711] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev
00:21:18.918 [2024-11-18 06:55:11.918718] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:21:18.918 [2024-11-18 06:55:11.918734] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:21:18.918 [2024-11-18 06:55:11.918768] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:21:18.918 [2024-11-18 06:55:11.918779] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev
00:21:18.918 [2024-11-18 06:55:11.918785] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:21:18.918 [2024-11-18 06:55:11.918791] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:21:18.918 [2024-11-18 06:55:11.918880] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 130.105 ms, result 0
00:21:19.889
00:21:19.889
00:21:19.889 06:55:12 ftl.ftl_restore -- ftl/restore.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --skip=131072 --count=262144
00:21:19.889 [2024-11-18 06:55:12.781308] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization...
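Two details in the shutdown sequence above are worth pulling out. The statistics block is self-consistent: with total writes 117440 against user writes 116480, total/user = 1.00824, which matches the logged WAF: 1.0082; the extra 960 blocks are presumably FTL's own metadata traffic. And because mngt/ftl_mngt.c traces every management step as an Action/name/duration/status quartet, per-step timings can be mined mechanically, e.g. to confirm that Persist P2L metadata (62.478 ms) accounts for roughly half of the 130.105 ms 'FTL shutdown' process. A minimal shell sketch, assuming the console output has been saved with one event per line as originally emitted; the file name console.log is a stand-in:

    # List the slowest FTL management steps recorded in a saved console log.
    # Pairs each "name: <step>" line with the "duration: <N> ms" line that
    # follows it inside the trace_step quartet, then sorts by duration.
    grep -oE 'name: .*|duration: [0-9.]+ ms' console.log \
      | awk -F': ' '/^name/ {step = $2} /^duration/ {print $2 "\t" step}' \
      | sort -rn | head

On this run it should put Persist P2L metadata (62.478 ms) and Initialize NV cache (18.574 ms) at the top.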
00:21:19.889 [2024-11-18 06:55:12.781444] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88075 ]
00:21:19.889 [2024-11-18 06:55:12.935771] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:21:19.889 [2024-11-18 06:55:12.958944] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:21:20.151 [2024-11-18 06:55:13.044302] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1
00:21:20.151 [2024-11-18 06:55:13.044359] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1
00:21:20.151 [2024-11-18 06:55:13.190919] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:21:20.151 [2024-11-18 06:55:13.190957] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration
00:21:20.151 [2024-11-18 06:55:13.190970] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms
00:21:20.151 [2024-11-18 06:55:13.190988] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:21:20.151 [2024-11-18 06:55:13.191027] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:21:20.151 [2024-11-18 06:55:13.191035] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev
00:21:20.151 [2024-11-18 06:55:13.191041] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.026 ms
00:21:20.151 [2024-11-18 06:55:13.191046] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:21:20.151 [2024-11-18 06:55:13.191061] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache
00:21:20.151 [2024-11-18 06:55:13.191324] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device
00:21:20.151 [2024-11-18 06:55:13.191364] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:21:20.151 [2024-11-18 06:55:13.191370] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev
00:21:20.151 [2024-11-18 06:55:13.191376] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.309 ms
00:21:20.151 [2024-11-18 06:55:13.191384] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:21:20.151 [2024-11-18 06:55:13.192416] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0
00:21:20.151 [2024-11-18 06:55:13.194279] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:21:20.151 [2024-11-18 06:55:13.194307] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block
00:21:20.151 [2024-11-18 06:55:13.194315] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.868 ms
00:21:20.151 [2024-11-18 06:55:13.194325] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:21:20.151 [2024-11-18 06:55:13.194365] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:21:20.151 [2024-11-18 06:55:13.194373] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block
00:21:20.151 [2024-11-18 06:55:13.194379] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms
00:21:20.151 [2024-11-18 06:55:13.194384] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:21:20.151 [2024-11-18 06:55:13.198664] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:21:20.151 [2024-11-18 06:55:13.198695] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:20.151 [2024-11-18 06:55:13.198704] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.246 ms 00:21:20.151 [2024-11-18 06:55:13.198710] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:20.151 [2024-11-18 06:55:13.198786] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:20.151 [2024-11-18 06:55:13.198796] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:20.151 [2024-11-18 06:55:13.198802] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.064 ms 00:21:20.151 [2024-11-18 06:55:13.198807] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:20.151 [2024-11-18 06:55:13.198842] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:20.151 [2024-11-18 06:55:13.198848] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:21:20.151 [2024-11-18 06:55:13.198854] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:21:20.151 [2024-11-18 06:55:13.198860] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:20.151 [2024-11-18 06:55:13.198877] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:21:20.151 [2024-11-18 06:55:13.200032] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:20.151 [2024-11-18 06:55:13.200056] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:20.151 [2024-11-18 06:55:13.200063] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.157 ms 00:21:20.151 [2024-11-18 06:55:13.200068] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:20.151 [2024-11-18 06:55:13.200093] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:20.151 [2024-11-18 06:55:13.200100] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:21:20.151 [2024-11-18 06:55:13.200106] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:21:20.151 [2024-11-18 06:55:13.200111] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:20.151 [2024-11-18 06:55:13.200130] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:21:20.151 [2024-11-18 06:55:13.200143] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:21:20.151 [2024-11-18 06:55:13.200171] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:21:20.151 [2024-11-18 06:55:13.200182] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:21:20.151 [2024-11-18 06:55:13.200260] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:21:20.151 [2024-11-18 06:55:13.200268] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:21:20.152 [2024-11-18 06:55:13.200276] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:21:20.152 [2024-11-18 06:55:13.200286] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:21:20.152 [2024-11-18 06:55:13.200300] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:21:20.152 [2024-11-18 06:55:13.200307] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:21:20.152 [2024-11-18 06:55:13.200315] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:21:20.152 [2024-11-18 06:55:13.200322] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:21:20.152 [2024-11-18 06:55:13.200332] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:21:20.152 [2024-11-18 06:55:13.200338] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:20.152 [2024-11-18 06:55:13.200346] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:21:20.152 [2024-11-18 06:55:13.200352] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.210 ms 00:21:20.152 [2024-11-18 06:55:13.200357] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:20.152 [2024-11-18 06:55:13.200420] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:20.152 [2024-11-18 06:55:13.200427] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:21:20.152 [2024-11-18 06:55:13.200433] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:21:20.152 [2024-11-18 06:55:13.200438] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:20.152 [2024-11-18 06:55:13.200512] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:21:20.152 [2024-11-18 06:55:13.200528] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:21:20.152 [2024-11-18 06:55:13.200537] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:20.152 [2024-11-18 06:55:13.200543] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:20.152 [2024-11-18 06:55:13.200548] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:21:20.152 [2024-11-18 06:55:13.200558] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:21:20.152 [2024-11-18 06:55:13.200563] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:21:20.152 [2024-11-18 06:55:13.200569] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:21:20.152 [2024-11-18 06:55:13.200574] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:21:20.152 [2024-11-18 06:55:13.200579] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:20.152 [2024-11-18 06:55:13.200584] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:21:20.152 [2024-11-18 06:55:13.200589] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:21:20.152 [2024-11-18 06:55:13.200594] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:20.152 [2024-11-18 06:55:13.200600] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:21:20.152 [2024-11-18 06:55:13.200605] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:21:20.152 [2024-11-18 06:55:13.200610] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:20.152 [2024-11-18 06:55:13.200615] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:21:20.152 [2024-11-18 06:55:13.200620] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:21:20.152 [2024-11-18 06:55:13.200626] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:20.152 [2024-11-18 06:55:13.200631] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:21:20.152 [2024-11-18 06:55:13.200636] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:21:20.152 [2024-11-18 06:55:13.200642] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:20.152 [2024-11-18 06:55:13.200647] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:21:20.152 [2024-11-18 06:55:13.200652] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:21:20.152 [2024-11-18 06:55:13.200656] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:20.152 [2024-11-18 06:55:13.200661] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:21:20.152 [2024-11-18 06:55:13.200666] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:21:20.152 [2024-11-18 06:55:13.200671] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:20.152 [2024-11-18 06:55:13.200676] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:21:20.152 [2024-11-18 06:55:13.200680] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:21:20.152 [2024-11-18 06:55:13.200685] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:20.152 [2024-11-18 06:55:13.200691] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:21:20.152 [2024-11-18 06:55:13.200696] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:21:20.152 [2024-11-18 06:55:13.200701] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:20.152 [2024-11-18 06:55:13.200708] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:21:20.152 [2024-11-18 06:55:13.200713] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:21:20.152 [2024-11-18 06:55:13.200717] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:20.152 [2024-11-18 06:55:13.200722] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:21:20.152 [2024-11-18 06:55:13.200728] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:21:20.152 [2024-11-18 06:55:13.200732] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:20.152 [2024-11-18 06:55:13.200737] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:21:20.152 [2024-11-18 06:55:13.200742] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:21:20.152 [2024-11-18 06:55:13.200747] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:20.152 [2024-11-18 06:55:13.200751] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:21:20.152 [2024-11-18 06:55:13.200757] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:21:20.152 [2024-11-18 06:55:13.200765] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:20.152 [2024-11-18 06:55:13.200771] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:20.152 [2024-11-18 06:55:13.200776] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:21:20.152 [2024-11-18 06:55:13.200781] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:21:20.152 [2024-11-18 06:55:13.200786] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:21:20.152 
[2024-11-18 06:55:13.200793] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:21:20.152 [2024-11-18 06:55:13.200798] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:21:20.152 [2024-11-18 06:55:13.200803] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:21:20.152 [2024-11-18 06:55:13.200809] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:21:20.152 [2024-11-18 06:55:13.200815] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:20.152 [2024-11-18 06:55:13.200821] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:21:20.152 [2024-11-18 06:55:13.200828] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:21:20.152 [2024-11-18 06:55:13.200833] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:21:20.152 [2024-11-18 06:55:13.200838] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:21:20.152 [2024-11-18 06:55:13.200843] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:21:20.152 [2024-11-18 06:55:13.200848] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:21:20.152 [2024-11-18 06:55:13.200854] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:21:20.152 [2024-11-18 06:55:13.200859] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:21:20.152 [2024-11-18 06:55:13.200864] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:21:20.152 [2024-11-18 06:55:13.200869] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:21:20.152 [2024-11-18 06:55:13.200874] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:21:20.153 [2024-11-18 06:55:13.200882] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:21:20.153 [2024-11-18 06:55:13.200887] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:21:20.153 [2024-11-18 06:55:13.200892] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:21:20.153 [2024-11-18 06:55:13.200897] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:21:20.153 [2024-11-18 06:55:13.200903] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:20.153 [2024-11-18 06:55:13.200909] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:21:20.153 [2024-11-18 06:55:13.200914] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:21:20.153 [2024-11-18 06:55:13.200920] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:21:20.153 [2024-11-18 06:55:13.200926] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:21:20.153 [2024-11-18 06:55:13.200932] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:20.153 [2024-11-18 06:55:13.200937] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:21:20.153 [2024-11-18 06:55:13.200943] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.473 ms 00:21:20.153 [2024-11-18 06:55:13.200951] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:20.153 [2024-11-18 06:55:13.208743] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:20.153 [2024-11-18 06:55:13.208776] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:20.153 [2024-11-18 06:55:13.208786] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.746 ms 00:21:20.153 [2024-11-18 06:55:13.208793] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:20.153 [2024-11-18 06:55:13.208855] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:20.153 [2024-11-18 06:55:13.208861] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:21:20.153 [2024-11-18 06:55:13.208870] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.047 ms 00:21:20.153 [2024-11-18 06:55:13.208875] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:20.153 [2024-11-18 06:55:13.228608] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:20.153 [2024-11-18 06:55:13.228670] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:20.153 [2024-11-18 06:55:13.228691] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.691 ms 00:21:20.153 [2024-11-18 06:55:13.228712] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:20.153 [2024-11-18 06:55:13.228780] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:20.153 [2024-11-18 06:55:13.228797] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:20.153 [2024-11-18 06:55:13.228812] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:21:20.153 [2024-11-18 06:55:13.228825] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:20.153 [2024-11-18 06:55:13.229297] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:20.153 [2024-11-18 06:55:13.229343] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:20.153 [2024-11-18 06:55:13.229361] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.394 ms 00:21:20.153 [2024-11-18 06:55:13.229376] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:20.153 [2024-11-18 06:55:13.229591] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:20.153 [2024-11-18 06:55:13.229622] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:20.153 [2024-11-18 06:55:13.229643] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.179 ms 00:21:20.153 [2024-11-18 06:55:13.229656] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:20.415 [2024-11-18 06:55:13.235898] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:20.415 [2024-11-18 06:55:13.235929] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:20.415 [2024-11-18 06:55:13.235941] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.213 ms 00:21:20.415 [2024-11-18 06:55:13.235947] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:20.415 [2024-11-18 06:55:13.237874] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 00:21:20.415 [2024-11-18 06:55:13.237904] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:21:20.415 [2024-11-18 06:55:13.237913] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:20.415 [2024-11-18 06:55:13.237920] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:21:20.415 [2024-11-18 06:55:13.237926] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.896 ms 00:21:20.415 [2024-11-18 06:55:13.237931] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:20.415 [2024-11-18 06:55:13.249034] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:20.415 [2024-11-18 06:55:13.249151] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:21:20.415 [2024-11-18 06:55:13.249164] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.073 ms 00:21:20.415 [2024-11-18 06:55:13.249170] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:20.415 [2024-11-18 06:55:13.250527] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:20.415 [2024-11-18 06:55:13.250556] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:21:20.415 [2024-11-18 06:55:13.250564] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.333 ms 00:21:20.415 [2024-11-18 06:55:13.250570] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:20.415 [2024-11-18 06:55:13.251820] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:20.415 [2024-11-18 06:55:13.251846] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:21:20.415 [2024-11-18 06:55:13.251853] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.225 ms 00:21:20.415 [2024-11-18 06:55:13.251858] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:20.415 [2024-11-18 06:55:13.252112] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:20.415 [2024-11-18 06:55:13.252121] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:21:20.415 [2024-11-18 06:55:13.252131] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.210 ms 00:21:20.415 [2024-11-18 06:55:13.252136] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:20.415 [2024-11-18 06:55:13.265870] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:20.415 [2024-11-18 06:55:13.265910] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:21:20.415 [2024-11-18 06:55:13.265919] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 
13.722 ms 00:21:20.415 [2024-11-18 06:55:13.265925] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:20.415 [2024-11-18 06:55:13.271682] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:21:20.415 [2024-11-18 06:55:13.273516] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:20.415 [2024-11-18 06:55:13.273543] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:21:20.415 [2024-11-18 06:55:13.273551] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.561 ms 00:21:20.415 [2024-11-18 06:55:13.273558] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:20.415 [2024-11-18 06:55:13.273598] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:20.415 [2024-11-18 06:55:13.273606] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:21:20.415 [2024-11-18 06:55:13.273613] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:21:20.415 [2024-11-18 06:55:13.273618] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:20.415 [2024-11-18 06:55:13.274802] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:20.415 [2024-11-18 06:55:13.274829] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:21:20.415 [2024-11-18 06:55:13.274835] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.155 ms 00:21:20.415 [2024-11-18 06:55:13.274845] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:20.415 [2024-11-18 06:55:13.274863] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:20.415 [2024-11-18 06:55:13.274869] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:21:20.415 [2024-11-18 06:55:13.274876] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:21:20.415 [2024-11-18 06:55:13.274881] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:20.415 [2024-11-18 06:55:13.274904] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:21:20.415 [2024-11-18 06:55:13.274911] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:20.415 [2024-11-18 06:55:13.274917] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:21:20.415 [2024-11-18 06:55:13.274923] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:21:20.415 [2024-11-18 06:55:13.274929] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:20.415 [2024-11-18 06:55:13.277747] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:20.415 [2024-11-18 06:55:13.277782] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:21:20.415 [2024-11-18 06:55:13.277794] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.805 ms 00:21:20.415 [2024-11-18 06:55:13.277801] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:20.415 [2024-11-18 06:55:13.277854] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:20.415 [2024-11-18 06:55:13.277865] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:21:20.415 [2024-11-18 06:55:13.277872] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:21:20.415 [2024-11-18 06:55:13.277878] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:20.415 
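Each management step above is traced by mngt/ftl_mngt.c:trace_step as an Action / name / duration / status quadruple. A small awk sketch (a hypothetical helper, not part of the test suite) tabulates per-step durations from a saved console log; it assumes one record per line, as the console prints them live, and ftl.log is a placeholder path:

    awk '/trace_step:.*name: /     { sub(/.*name: /, "");     name = $0 }
         /trace_step:.*duration: / { sub(/.*duration: /, ""); printf "%-32s %s\n", name, $0 }' ftl.log

On this run it would single out "Initialize NV cache" (19.691 ms) and "Restore P2L checkpoints" (13.722 ms) as the slowest steps of the "FTL startup" process whose total is reported just below.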
[2024-11-18 06:55:13.278601] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 87.382 ms, result 0 00:21:21.365  [2024-11-18T06:55:15.838Z] Copying: 30/1024 [MB] (30 MBps) [2024-11-18T06:55:16.783Z] Copying: 48/1024 [MB] (18 MBps) [2024-11-18T06:55:17.725Z] Copying: 66/1024 [MB] (17 MBps) [2024-11-18T06:55:18.668Z] Copying: 80/1024 [MB] (13 MBps) [2024-11-18T06:55:19.613Z] Copying: 98/1024 [MB] (18 MBps) [2024-11-18T06:55:20.556Z] Copying: 113/1024 [MB] (14 MBps) [2024-11-18T06:55:21.500Z] Copying: 131/1024 [MB] (18 MBps) [2024-11-18T06:55:22.444Z] Copying: 147/1024 [MB] (16 MBps) [2024-11-18T06:55:23.832Z] Copying: 166/1024 [MB] (19 MBps) [2024-11-18T06:55:24.776Z] Copying: 185/1024 [MB] (18 MBps) [2024-11-18T06:55:25.721Z] Copying: 201/1024 [MB] (16 MBps) [2024-11-18T06:55:26.665Z] Copying: 212/1024 [MB] (11 MBps) [2024-11-18T06:55:27.609Z] Copying: 223/1024 [MB] (10 MBps) [2024-11-18T06:55:28.554Z] Copying: 234/1024 [MB] (10 MBps) [2024-11-18T06:55:29.499Z] Copying: 247/1024 [MB] (13 MBps) [2024-11-18T06:55:30.443Z] Copying: 266/1024 [MB] (19 MBps) [2024-11-18T06:55:31.831Z] Copying: 276/1024 [MB] (10 MBps) [2024-11-18T06:55:32.776Z] Copying: 287/1024 [MB] (10 MBps) [2024-11-18T06:55:33.722Z] Copying: 308/1024 [MB] (20 MBps) [2024-11-18T06:55:34.666Z] Copying: 321/1024 [MB] (13 MBps) [2024-11-18T06:55:35.611Z] Copying: 340/1024 [MB] (19 MBps) [2024-11-18T06:55:36.556Z] Copying: 359/1024 [MB] (18 MBps) [2024-11-18T06:55:37.501Z] Copying: 376/1024 [MB] (16 MBps) [2024-11-18T06:55:38.452Z] Copying: 394/1024 [MB] (18 MBps) [2024-11-18T06:55:39.838Z] Copying: 411/1024 [MB] (17 MBps) [2024-11-18T06:55:40.787Z] Copying: 421/1024 [MB] (10 MBps) [2024-11-18T06:55:41.767Z] Copying: 432/1024 [MB] (10 MBps) [2024-11-18T06:55:42.709Z] Copying: 443/1024 [MB] (10 MBps) [2024-11-18T06:55:43.651Z] Copying: 466/1024 [MB] (23 MBps) [2024-11-18T06:55:44.592Z] Copying: 484/1024 [MB] (17 MBps) [2024-11-18T06:55:45.536Z] Copying: 504/1024 [MB] (20 MBps) [2024-11-18T06:55:46.479Z] Copying: 527/1024 [MB] (23 MBps) [2024-11-18T06:55:47.422Z] Copying: 550/1024 [MB] (22 MBps) [2024-11-18T06:55:48.815Z] Copying: 564/1024 [MB] (14 MBps) [2024-11-18T06:55:49.769Z] Copying: 597/1024 [MB] (32 MBps) [2024-11-18T06:55:50.707Z] Copying: 617/1024 [MB] (20 MBps) [2024-11-18T06:55:51.647Z] Copying: 638/1024 [MB] (20 MBps) [2024-11-18T06:55:52.586Z] Copying: 653/1024 [MB] (14 MBps) [2024-11-18T06:55:53.524Z] Copying: 675/1024 [MB] (22 MBps) [2024-11-18T06:55:54.465Z] Copying: 701/1024 [MB] (25 MBps) [2024-11-18T06:55:55.847Z] Copying: 714/1024 [MB] (13 MBps) [2024-11-18T06:55:56.418Z] Copying: 725/1024 [MB] (10 MBps) [2024-11-18T06:55:57.794Z] Copying: 735/1024 [MB] (10 MBps) [2024-11-18T06:55:58.733Z] Copying: 749/1024 [MB] (14 MBps) [2024-11-18T06:55:59.671Z] Copying: 760/1024 [MB] (10 MBps) [2024-11-18T06:56:00.608Z] Copying: 770/1024 [MB] (10 MBps) [2024-11-18T06:56:01.546Z] Copying: 781/1024 [MB] (10 MBps) [2024-11-18T06:56:02.481Z] Copying: 792/1024 [MB] (11 MBps) [2024-11-18T06:56:03.419Z] Copying: 804/1024 [MB] (12 MBps) [2024-11-18T06:56:04.800Z] Copying: 816/1024 [MB] (11 MBps) [2024-11-18T06:56:05.737Z] Copying: 826/1024 [MB] (10 MBps) [2024-11-18T06:56:06.671Z] Copying: 838/1024 [MB] (11 MBps) [2024-11-18T06:56:07.610Z] Copying: 849/1024 [MB] (11 MBps) [2024-11-18T06:56:08.545Z] Copying: 861/1024 [MB] (11 MBps) [2024-11-18T06:56:09.486Z] Copying: 873/1024 [MB] (12 MBps) [2024-11-18T06:56:10.489Z] Copying: 884/1024 [MB] (11 MBps) 
[2024-11-18T06:56:11.423Z] Copying: 895/1024 [MB] (10 MBps) [2024-11-18T06:56:12.799Z] Copying: 907/1024 [MB] (11 MBps) [2024-11-18T06:56:13.734Z] Copying: 919/1024 [MB] (12 MBps) [2024-11-18T06:56:14.670Z] Copying: 931/1024 [MB] (12 MBps) [2024-11-18T06:56:15.610Z] Copying: 943/1024 [MB] (11 MBps) [2024-11-18T06:56:16.549Z] Copying: 954/1024 [MB] (11 MBps) [2024-11-18T06:56:17.484Z] Copying: 965/1024 [MB] (10 MBps) [2024-11-18T06:56:18.420Z] Copying: 978/1024 [MB] (12 MBps) [2024-11-18T06:56:19.804Z] Copying: 990/1024 [MB] (12 MBps) [2024-11-18T06:56:20.744Z] Copying: 1001/1024 [MB] (10 MBps) [2024-11-18T06:56:21.685Z] Copying: 1011/1024 [MB] (10 MBps) [2024-11-18T06:56:21.685Z] Copying: 1022/1024 [MB] (10 MBps) [2024-11-18T06:56:21.685Z] Copying: 1024/1024 [MB] (average 15 MBps)[2024-11-18 06:56:21.562835] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:28.598 [2024-11-18 06:56:21.563003] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:22:28.598 [2024-11-18 06:56:21.563024] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:22:28.598 [2024-11-18 06:56:21.563033] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:28.598 [2024-11-18 06:56:21.563058] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:22:28.598 [2024-11-18 06:56:21.563553] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:28.598 [2024-11-18 06:56:21.563570] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:22:28.598 [2024-11-18 06:56:21.563579] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.480 ms 00:22:28.598 [2024-11-18 06:56:21.563593] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:28.598 [2024-11-18 06:56:21.563796] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:28.598 [2024-11-18 06:56:21.563806] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:22:28.598 [2024-11-18 06:56:21.563815] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.186 ms 00:22:28.598 [2024-11-18 06:56:21.563824] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:28.598 [2024-11-18 06:56:21.568719] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:28.598 [2024-11-18 06:56:21.568753] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:22:28.598 [2024-11-18 06:56:21.568764] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.879 ms 00:22:28.599 [2024-11-18 06:56:21.568771] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:28.599 [2024-11-18 06:56:21.575247] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:28.599 [2024-11-18 06:56:21.575275] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:22:28.599 [2024-11-18 06:56:21.575285] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.438 ms 00:22:28.599 [2024-11-18 06:56:21.575293] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:28.599 [2024-11-18 06:56:21.577635] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:28.599 [2024-11-18 06:56:21.577670] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:22:28.599 [2024-11-18 06:56:21.577679] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.301 ms 00:22:28.599 [2024-11-18 
06:56:21.577686] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:28.599 [2024-11-18 06:56:21.582170] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:28.599 [2024-11-18 06:56:21.582206] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:22:28.599 [2024-11-18 06:56:21.582216] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.453 ms 00:22:28.599 [2024-11-18 06:56:21.582223] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:28.861 [2024-11-18 06:56:21.928064] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:28.861 [2024-11-18 06:56:21.928128] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:22:28.861 [2024-11-18 06:56:21.928142] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 345.797 ms 00:22:28.861 [2024-11-18 06:56:21.928150] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:28.861 [2024-11-18 06:56:21.931672] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:28.861 [2024-11-18 06:56:21.931739] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:22:28.861 [2024-11-18 06:56:21.931750] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.505 ms 00:22:28.861 [2024-11-18 06:56:21.931757] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:28.861 [2024-11-18 06:56:21.934650] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:28.861 [2024-11-18 06:56:21.934698] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:22:28.861 [2024-11-18 06:56:21.934732] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.849 ms 00:22:28.861 [2024-11-18 06:56:21.934740] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:28.861 [2024-11-18 06:56:21.936970] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:28.861 [2024-11-18 06:56:21.937028] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:22:28.861 [2024-11-18 06:56:21.937039] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.187 ms 00:22:28.861 [2024-11-18 06:56:21.937046] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:28.861 [2024-11-18 06:56:21.939287] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:28.861 [2024-11-18 06:56:21.939337] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:22:28.861 [2024-11-18 06:56:21.939346] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.170 ms 00:22:28.861 [2024-11-18 06:56:21.939354] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:28.861 [2024-11-18 06:56:21.939394] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:22:28.861 [2024-11-18 06:56:21.939408] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 131072 / 261120 wr_cnt: 1 state: open 00:22:28.861 [2024-11-18 06:56:21.939419] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:22:28.861 [2024-11-18 06:56:21.939428] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:22:28.861 [2024-11-18 06:56:21.939436] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:22:28.861 [2024-11-18 06:56:21.939443] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:22:28.861 [2024-11-18 06:56:21.939451] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:22:28.861 [2024-11-18 06:56:21.939458] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:22:28.861 [2024-11-18 06:56:21.939466] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:22:28.861 [2024-11-18 06:56:21.939473] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:22:28.861 [2024-11-18 06:56:21.939482] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:22:28.861 [2024-11-18 06:56:21.939490] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:22:28.861 [2024-11-18 06:56:21.939498] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:22:28.861 [2024-11-18 06:56:21.939505] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:22:28.861 [2024-11-18 06:56:21.939513] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:22:28.861 [2024-11-18 06:56:21.939521] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:22:28.861 [2024-11-18 06:56:21.939528] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:22:28.861 [2024-11-18 06:56:21.939536] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:22:28.861 [2024-11-18 06:56:21.939543] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:22:28.861 [2024-11-18 06:56:21.939551] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:22:28.861 [2024-11-18 06:56:21.939558] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:22:28.861 [2024-11-18 06:56:21.939565] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:22:28.862 [2024-11-18 06:56:21.939572] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:22:28.862 [2024-11-18 06:56:21.939580] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:22:28.862 [2024-11-18 06:56:21.939587] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:22:28.862 [2024-11-18 06:56:21.939594] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:22:28.862 [2024-11-18 06:56:21.939602] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:22:28.862 [2024-11-18 06:56:21.939611] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:22:28.862 [2024-11-18 06:56:21.939618] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:22:28.862 [2024-11-18 06:56:21.939625] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:22:28.862 [2024-11-18 
06:56:21.939636] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:22:28.862 [2024-11-18 06:56:21.939644] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:22:28.862 [2024-11-18 06:56:21.939652] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:22:28.862 [2024-11-18 06:56:21.939659] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:22:28.862 [2024-11-18 06:56:21.939667] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:22:28.862 [2024-11-18 06:56:21.939674] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:22:28.862 [2024-11-18 06:56:21.939682] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:22:28.862 [2024-11-18 06:56:21.939689] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:22:28.862 [2024-11-18 06:56:21.939697] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:22:28.862 [2024-11-18 06:56:21.939705] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:22:28.862 [2024-11-18 06:56:21.939713] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:22:28.862 [2024-11-18 06:56:21.939721] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:22:28.862 [2024-11-18 06:56:21.939728] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:22:28.862 [2024-11-18 06:56:21.939736] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:22:28.862 [2024-11-18 06:56:21.939743] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:22:28.862 [2024-11-18 06:56:21.939750] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:22:28.862 [2024-11-18 06:56:21.939758] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:22:28.862 [2024-11-18 06:56:21.939765] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:22:28.862 [2024-11-18 06:56:21.939772] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:22:28.862 [2024-11-18 06:56:21.939780] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:22:28.862 [2024-11-18 06:56:21.939787] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:22:28.862 [2024-11-18 06:56:21.939795] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:22:28.862 [2024-11-18 06:56:21.939803] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:22:28.862 [2024-11-18 06:56:21.939810] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:22:28.862 [2024-11-18 06:56:21.939818] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 
00:22:28.862 [2024-11-18 06:56:21.939825] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:22:28.862 [2024-11-18 06:56:21.939833] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:22:28.862 [2024-11-18 06:56:21.939840] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:22:28.862 [2024-11-18 06:56:21.939848] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:22:28.862 [2024-11-18 06:56:21.939855] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:22:28.862 [2024-11-18 06:56:21.939862] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:22:28.862 [2024-11-18 06:56:21.939870] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:22:28.862 [2024-11-18 06:56:21.939879] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:22:28.862 [2024-11-18 06:56:21.939888] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:22:28.862 [2024-11-18 06:56:21.939896] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:22:28.862 [2024-11-18 06:56:21.939903] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:22:28.862 [2024-11-18 06:56:21.939911] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:22:28.862 [2024-11-18 06:56:21.939918] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:22:28.862 [2024-11-18 06:56:21.939926] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:22:28.862 [2024-11-18 06:56:21.939933] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:22:28.862 [2024-11-18 06:56:21.939942] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:22:28.862 [2024-11-18 06:56:21.939949] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:22:28.862 [2024-11-18 06:56:21.939957] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:22:28.862 [2024-11-18 06:56:21.939964] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:22:28.862 [2024-11-18 06:56:21.939972] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:22:28.862 [2024-11-18 06:56:21.940005] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:22:28.862 [2024-11-18 06:56:21.940013] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:22:28.862 [2024-11-18 06:56:21.940020] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:22:28.862 [2024-11-18 06:56:21.940028] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:22:28.862 [2024-11-18 06:56:21.940036] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 
wr_cnt: 0 state: free 00:22:28.862 [2024-11-18 06:56:21.940043] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:22:28.862 [2024-11-18 06:56:21.940050] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:22:28.862 [2024-11-18 06:56:21.940058] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:22:28.862 [2024-11-18 06:56:21.940065] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:22:28.862 [2024-11-18 06:56:21.940073] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:22:28.862 [2024-11-18 06:56:21.940082] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:22:28.862 [2024-11-18 06:56:21.940089] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:22:28.862 [2024-11-18 06:56:21.940097] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:22:28.862 [2024-11-18 06:56:21.940104] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:22:28.862 [2024-11-18 06:56:21.940112] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:22:28.862 [2024-11-18 06:56:21.940119] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:22:28.862 [2024-11-18 06:56:21.940126] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:22:28.862 [2024-11-18 06:56:21.940134] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:22:28.862 [2024-11-18 06:56:21.940141] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:22:28.862 [2024-11-18 06:56:21.940150] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:22:28.862 [2024-11-18 06:56:21.940158] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:22:28.862 [2024-11-18 06:56:21.940166] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:22:28.862 [2024-11-18 06:56:21.940174] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:22:28.862 [2024-11-18 06:56:21.940182] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:22:28.862 [2024-11-18 06:56:21.940190] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:22:28.862 [2024-11-18 06:56:21.940197] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:22:28.862 [2024-11-18 06:56:21.940213] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:22:28.862 [2024-11-18 06:56:21.940221] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 0cc73ce1-3fae-4d36-91d2-6119a87f6d65 00:22:28.862 [2024-11-18 06:56:21.940230] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 131072 00:22:28.862 [2024-11-18 06:56:21.940238] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 15552 00:22:28.862 [2024-11-18 
06:56:21.940256] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 14592 00:22:28.862 [2024-11-18 06:56:21.940264] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0658 00:22:28.862 [2024-11-18 06:56:21.940272] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:22:28.862 [2024-11-18 06:56:21.940279] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:22:28.862 [2024-11-18 06:56:21.940287] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:22:28.862 [2024-11-18 06:56:21.940293] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:22:28.862 [2024-11-18 06:56:21.940300] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:22:28.862 [2024-11-18 06:56:21.940308] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:28.863 [2024-11-18 06:56:21.940322] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:22:28.863 [2024-11-18 06:56:21.940331] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.914 ms 00:22:28.863 [2024-11-18 06:56:21.940338] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:28.863 [2024-11-18 06:56:21.942992] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:28.863 [2024-11-18 06:56:21.943145] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:22:28.863 [2024-11-18 06:56:21.943201] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.616 ms 00:22:28.863 [2024-11-18 06:56:21.943225] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:28.863 [2024-11-18 06:56:21.943374] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:28.863 [2024-11-18 06:56:21.943464] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:22:28.863 [2024-11-18 06:56:21.943489] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.109 ms 00:22:28.863 [2024-11-18 06:56:21.943509] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:29.125 [2024-11-18 06:56:21.951168] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:29.125 [2024-11-18 06:56:21.951339] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:22:29.125 [2024-11-18 06:56:21.951395] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:29.125 [2024-11-18 06:56:21.951417] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:29.125 [2024-11-18 06:56:21.951492] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:29.125 [2024-11-18 06:56:21.951516] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:22:29.125 [2024-11-18 06:56:21.951536] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:29.125 [2024-11-18 06:56:21.951555] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:29.125 [2024-11-18 06:56:21.951643] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:29.125 [2024-11-18 06:56:21.951751] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:22:29.125 [2024-11-18 06:56:21.951771] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:29.125 [2024-11-18 06:56:21.951791] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:29.125 [2024-11-18 06:56:21.951817] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Rollback 00:22:29.125 [2024-11-18 06:56:21.951837] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:22:29.125 [2024-11-18 06:56:21.951856] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:29.125 [2024-11-18 06:56:21.951948] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:29.125 [2024-11-18 06:56:21.965554] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:29.125 [2024-11-18 06:56:21.965752] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:22:29.125 [2024-11-18 06:56:21.965818] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:29.125 [2024-11-18 06:56:21.965841] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:29.125 [2024-11-18 06:56:21.975923] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:29.125 [2024-11-18 06:56:21.976122] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:22:29.125 [2024-11-18 06:56:21.976177] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:29.125 [2024-11-18 06:56:21.976200] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:29.125 [2024-11-18 06:56:21.976261] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:29.125 [2024-11-18 06:56:21.976292] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:22:29.125 [2024-11-18 06:56:21.976320] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:29.125 [2024-11-18 06:56:21.976339] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:29.125 [2024-11-18 06:56:21.976385] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:29.125 [2024-11-18 06:56:21.976406] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:22:29.125 [2024-11-18 06:56:21.976427] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:29.125 [2024-11-18 06:56:21.976481] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:29.125 [2024-11-18 06:56:21.976575] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:29.125 [2024-11-18 06:56:21.976600] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:22:29.125 [2024-11-18 06:56:21.976710] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:29.125 [2024-11-18 06:56:21.977033] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:29.125 [2024-11-18 06:56:21.977091] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:29.125 [2024-11-18 06:56:21.977102] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:22:29.125 [2024-11-18 06:56:21.977111] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:29.125 [2024-11-18 06:56:21.977119] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:29.125 [2024-11-18 06:56:21.977165] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:29.125 [2024-11-18 06:56:21.977174] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:22:29.125 [2024-11-18 06:56:21.977186] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:29.125 [2024-11-18 06:56:21.977194] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:29.125 
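The "Dump statistics" block above ties together: Band 1 holds 131072 of 261120 valid blocks, matching the 131072 total valid LBAs, and the WAF figure is simply total writes divided by user writes (the Rollback records on either side are the shutdown path unwinding each startup action in reverse, each a 0.000 ms no-op here). Reproducing the logged value:

    # write amplification = media writes / user writes, from the stats block above
    awk 'BEGIN { printf "WAF = %.4f\n", 15552 / 14592 }'   # -> WAF = 1.0658

The 960 blocks written beyond the user writes are the FTL's own metadata and relocation traffic.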
[2024-11-18 06:56:21.977239] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:29.125 [2024-11-18 06:56:21.977250] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:22:29.125 [2024-11-18 06:56:21.977259] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:29.125 [2024-11-18 06:56:21.977268] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:29.125 [2024-11-18 06:56:21.977402] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 414.523 ms, result 0 00:22:29.125 00:22:29.125 00:22:29.125 06:56:22 ftl.ftl_restore -- ftl/restore.sh@82 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:22:31.669 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:22:31.669 06:56:24 ftl.ftl_restore -- ftl/restore.sh@84 -- # trap - SIGINT SIGTERM EXIT 00:22:31.669 06:56:24 ftl.ftl_restore -- ftl/restore.sh@85 -- # restore_kill 00:22:31.669 06:56:24 ftl.ftl_restore -- ftl/restore.sh@28 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:22:31.669 06:56:24 ftl.ftl_restore -- ftl/restore.sh@29 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:22:31.669 06:56:24 ftl.ftl_restore -- ftl/restore.sh@30 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:22:31.670 06:56:24 ftl.ftl_restore -- ftl/restore.sh@32 -- # killprocess 85786 00:22:31.670 06:56:24 ftl.ftl_restore -- common/autotest_common.sh@954 -- # '[' -z 85786 ']' 00:22:31.670 Process with pid 85786 is not found 00:22:31.670 06:56:24 ftl.ftl_restore -- common/autotest_common.sh@958 -- # kill -0 85786 00:22:31.670 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (85786) - No such process 00:22:31.670 06:56:24 ftl.ftl_restore -- common/autotest_common.sh@981 -- # echo 'Process with pid 85786 is not found' 00:22:31.670 06:56:24 ftl.ftl_restore -- ftl/restore.sh@33 -- # remove_shm 00:22:31.670 06:56:24 ftl.ftl_restore -- ftl/common.sh@204 -- # echo Remove shared memory files 00:22:31.670 Remove shared memory files 00:22:31.670 06:56:24 ftl.ftl_restore -- ftl/common.sh@205 -- # rm -f rm -f 00:22:31.670 06:56:24 ftl.ftl_restore -- ftl/common.sh@206 -- # rm -f rm -f 00:22:31.670 06:56:24 ftl.ftl_restore -- ftl/common.sh@207 -- # rm -f rm -f 00:22:31.670 06:56:24 ftl.ftl_restore -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:22:31.670 06:56:24 ftl.ftl_restore -- ftl/common.sh@209 -- # rm -f rm -f 00:22:31.670 ************************************ 00:22:31.670 END TEST ftl_restore 00:22:31.670 ************************************ 00:22:31.670 00:22:31.670 real 4m50.900s 00:22:31.670 user 4m39.546s 00:22:31.670 sys 0m11.272s 00:22:31.670 06:56:24 ftl.ftl_restore -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:31.670 06:56:24 ftl.ftl_restore -- common/autotest_common.sh@10 -- # set +x 00:22:31.670 06:56:24 ftl -- ftl/ftl.sh@77 -- # run_test ftl_dirty_shutdown /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh -c 0000:00:10.0 0000:00:11.0 00:22:31.670 06:56:24 ftl -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:22:31.670 06:56:24 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:31.670 06:56:24 ftl -- common/autotest_common.sh@10 -- # set +x 00:22:31.670 ************************************ 00:22:31.670 START TEST ftl_dirty_shutdown 00:22:31.670 ************************************ 00:22:31.670 06:56:24 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1129 -- # 
/home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh -c 0000:00:10.0 0000:00:11.0 00:22:31.670 * Looking for test storage... 00:22:31.670 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:22:31.670 06:56:24 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:22:31.670 06:56:24 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:22:31.670 06:56:24 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1693 -- # lcov --version 00:22:31.670 06:56:24 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:22:31.670 06:56:24 ftl.ftl_dirty_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:31.670 06:56:24 ftl.ftl_dirty_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:31.670 06:56:24 ftl.ftl_dirty_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:31.670 06:56:24 ftl.ftl_dirty_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:22:31.670 06:56:24 ftl.ftl_dirty_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:22:31.670 06:56:24 ftl.ftl_dirty_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:22:31.670 06:56:24 ftl.ftl_dirty_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:22:31.670 06:56:24 ftl.ftl_dirty_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:22:31.670 06:56:24 ftl.ftl_dirty_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:22:31.670 06:56:24 ftl.ftl_dirty_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:22:31.670 06:56:24 ftl.ftl_dirty_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:31.670 06:56:24 ftl.ftl_dirty_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:22:31.670 06:56:24 ftl.ftl_dirty_shutdown -- scripts/common.sh@345 -- # : 1 00:22:31.670 06:56:24 ftl.ftl_dirty_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:31.670 06:56:24 ftl.ftl_dirty_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:31.670 06:56:24 ftl.ftl_dirty_shutdown -- scripts/common.sh@365 -- # decimal 1 00:22:31.670 06:56:24 ftl.ftl_dirty_shutdown -- scripts/common.sh@353 -- # local d=1 00:22:31.670 06:56:24 ftl.ftl_dirty_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:31.670 06:56:24 ftl.ftl_dirty_shutdown -- scripts/common.sh@355 -- # echo 1 00:22:31.670 06:56:24 ftl.ftl_dirty_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:22:31.670 06:56:24 ftl.ftl_dirty_shutdown -- scripts/common.sh@366 -- # decimal 2 00:22:31.670 06:56:24 ftl.ftl_dirty_shutdown -- scripts/common.sh@353 -- # local d=2 00:22:31.670 06:56:24 ftl.ftl_dirty_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:31.670 06:56:24 ftl.ftl_dirty_shutdown -- scripts/common.sh@355 -- # echo 2 00:22:31.670 06:56:24 ftl.ftl_dirty_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:22:31.670 06:56:24 ftl.ftl_dirty_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:31.670 06:56:24 ftl.ftl_dirty_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:31.670 06:56:24 ftl.ftl_dirty_shutdown -- scripts/common.sh@368 -- # return 0 00:22:31.670 06:56:24 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:31.670 06:56:24 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:22:31.670 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:31.670 --rc genhtml_branch_coverage=1 00:22:31.670 --rc genhtml_function_coverage=1 00:22:31.670 --rc genhtml_legend=1 00:22:31.670 --rc geninfo_all_blocks=1 00:22:31.670 --rc geninfo_unexecuted_blocks=1 00:22:31.670 00:22:31.670 ' 00:22:31.670 06:56:24 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:22:31.670 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:31.670 --rc genhtml_branch_coverage=1 00:22:31.670 --rc genhtml_function_coverage=1 00:22:31.670 --rc genhtml_legend=1 00:22:31.670 --rc geninfo_all_blocks=1 00:22:31.670 --rc geninfo_unexecuted_blocks=1 00:22:31.670 00:22:31.670 ' 00:22:31.670 06:56:24 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:22:31.670 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:31.670 --rc genhtml_branch_coverage=1 00:22:31.670 --rc genhtml_function_coverage=1 00:22:31.670 --rc genhtml_legend=1 00:22:31.670 --rc geninfo_all_blocks=1 00:22:31.670 --rc geninfo_unexecuted_blocks=1 00:22:31.670 00:22:31.670 ' 00:22:31.670 06:56:24 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:22:31.670 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:31.670 --rc genhtml_branch_coverage=1 00:22:31.670 --rc genhtml_function_coverage=1 00:22:31.670 --rc genhtml_legend=1 00:22:31.670 --rc geninfo_all_blocks=1 00:22:31.670 --rc geninfo_unexecuted_blocks=1 00:22:31.670 00:22:31.670 ' 00:22:31.670 06:56:24 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:22:31.670 06:56:24 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh 00:22:31.670 06:56:24 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:22:31.670 06:56:24 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:22:31.670 06:56:24 ftl.ftl_dirty_shutdown -- ftl/common.sh@9 -- # readlink -f 
/home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:22:31.670 06:56:24 ftl.ftl_dirty_shutdown -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:22:31.670 06:56:24 ftl.ftl_dirty_shutdown -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:22:31.670 06:56:24 ftl.ftl_dirty_shutdown -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:22:31.670 06:56:24 ftl.ftl_dirty_shutdown -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:22:31.670 06:56:24 ftl.ftl_dirty_shutdown -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:22:31.670 06:56:24 ftl.ftl_dirty_shutdown -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:22:31.670 06:56:24 ftl.ftl_dirty_shutdown -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:22:31.670 06:56:24 ftl.ftl_dirty_shutdown -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:22:31.670 06:56:24 ftl.ftl_dirty_shutdown -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:22:31.670 06:56:24 ftl.ftl_dirty_shutdown -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:22:31.670 06:56:24 ftl.ftl_dirty_shutdown -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:22:31.670 06:56:24 ftl.ftl_dirty_shutdown -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:22:31.670 06:56:24 ftl.ftl_dirty_shutdown -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:22:31.670 06:56:24 ftl.ftl_dirty_shutdown -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:22:31.670 06:56:24 ftl.ftl_dirty_shutdown -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:22:31.670 06:56:24 ftl.ftl_dirty_shutdown -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:22:31.670 06:56:24 ftl.ftl_dirty_shutdown -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:22:31.670 06:56:24 ftl.ftl_dirty_shutdown -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:22:31.670 06:56:24 ftl.ftl_dirty_shutdown -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:22:31.670 06:56:24 ftl.ftl_dirty_shutdown -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:22:31.670 06:56:24 ftl.ftl_dirty_shutdown -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:22:31.670 06:56:24 ftl.ftl_dirty_shutdown -- ftl/common.sh@23 -- # spdk_ini_pid= 00:22:31.670 06:56:24 ftl.ftl_dirty_shutdown -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:22:31.670 06:56:24 ftl.ftl_dirty_shutdown -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:22:31.670 06:56:24 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:22:31.670 06:56:24 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@12 -- # spdk_dd=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:22:31.670 06:56:24 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@14 -- # getopts :u:c: opt 00:22:31.670 06:56:24 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@15 -- # case $opt in 00:22:31.670 06:56:24 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@17 -- # nv_cache=0000:00:10.0 00:22:31.670 06:56:24 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@14 -- # getopts :u:c: opt 00:22:31.670 06:56:24 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@21 -- # shift 2 00:22:31.670 06:56:24 
ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@23 -- # device=0000:00:11.0 00:22:31.670 06:56:24 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@24 -- # timeout=240 00:22:31.670 06:56:24 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@26 -- # block_size=4096 00:22:31.671 06:56:24 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@27 -- # chunk_size=262144 00:22:31.671 06:56:24 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@28 -- # data_size=262144 00:22:31.671 06:56:24 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@42 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:22:31.671 06:56:24 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@45 -- # svcpid=88879 00:22:31.671 06:56:24 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@47 -- # waitforlisten 88879 00:22:31.671 06:56:24 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@835 -- # '[' -z 88879 ']' 00:22:31.671 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:31.671 06:56:24 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:31.671 06:56:24 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:22:31.671 06:56:24 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:31.671 06:56:24 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:31.671 06:56:24 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:31.671 06:56:24 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@10 -- # set +x 00:22:31.671 [2024-11-18 06:56:24.707935] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 
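At this point the harness has parsed its -c/-u options (nv_cache=0000:00:10.0, device=0000:00:11.0) and launched spdk_tgt on core mask 0x1, blocking in waitforlisten until the JSON-RPC socket at /var/tmp/spdk.sock comes up. A minimal sketch of that handshake, using the binary and socket paths from this run (the socket-polling loop is an assumption; the real waitforlisten helper in autotest_common.sh also checks pid liveness and enforces max_retries):

    # start the SPDK target on core 0, as dirty_shutdown.sh@44 does above
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 &
    svcpid=$!
    # assumption: poll for the JSON-RPC UNIX socket instead of the full
    # waitforlisten logic (retries, timeout, checking the pid is alive)
    while ! [ -S /var/tmp/spdk.sock ]; do sleep 0.1; done
    # every later step in this log drives the target through rpc.py, e.g.:
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0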
00:22:31.671 [2024-11-18 06:56:24.708191] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88879 ] 00:22:31.932 [2024-11-18 06:56:24.861108] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:31.932 [2024-11-18 06:56:24.881896] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:32.503 06:56:25 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:32.503 06:56:25 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@868 -- # return 0 00:22:32.503 06:56:25 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@49 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:22:32.503 06:56:25 ftl.ftl_dirty_shutdown -- ftl/common.sh@54 -- # local name=nvme0 00:22:32.503 06:56:25 ftl.ftl_dirty_shutdown -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:22:32.503 06:56:25 ftl.ftl_dirty_shutdown -- ftl/common.sh@56 -- # local size=103424 00:22:32.503 06:56:25 ftl.ftl_dirty_shutdown -- ftl/common.sh@59 -- # local base_bdev 00:22:32.503 06:56:25 ftl.ftl_dirty_shutdown -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:22:32.763 06:56:25 ftl.ftl_dirty_shutdown -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:22:32.763 06:56:25 ftl.ftl_dirty_shutdown -- ftl/common.sh@62 -- # local base_size 00:22:32.763 06:56:25 ftl.ftl_dirty_shutdown -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:22:32.763 06:56:25 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:22:32.763 06:56:25 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:22:33.023 06:56:25 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:22:33.023 06:56:25 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:22:33.023 06:56:25 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:22:33.023 06:56:26 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:22:33.023 { 00:22:33.023 "name": "nvme0n1", 00:22:33.023 "aliases": [ 00:22:33.023 "f37eb092-04c7-4916-9b74-d72f7d45e613" 00:22:33.023 ], 00:22:33.023 "product_name": "NVMe disk", 00:22:33.023 "block_size": 4096, 00:22:33.023 "num_blocks": 1310720, 00:22:33.023 "uuid": "f37eb092-04c7-4916-9b74-d72f7d45e613", 00:22:33.023 "numa_id": -1, 00:22:33.023 "assigned_rate_limits": { 00:22:33.023 "rw_ios_per_sec": 0, 00:22:33.023 "rw_mbytes_per_sec": 0, 00:22:33.023 "r_mbytes_per_sec": 0, 00:22:33.023 "w_mbytes_per_sec": 0 00:22:33.023 }, 00:22:33.023 "claimed": true, 00:22:33.023 "claim_type": "read_many_write_one", 00:22:33.023 "zoned": false, 00:22:33.023 "supported_io_types": { 00:22:33.023 "read": true, 00:22:33.023 "write": true, 00:22:33.023 "unmap": true, 00:22:33.023 "flush": true, 00:22:33.023 "reset": true, 00:22:33.023 "nvme_admin": true, 00:22:33.023 "nvme_io": true, 00:22:33.023 "nvme_io_md": false, 00:22:33.023 "write_zeroes": true, 00:22:33.023 "zcopy": false, 00:22:33.023 "get_zone_info": false, 00:22:33.023 "zone_management": false, 00:22:33.023 "zone_append": false, 00:22:33.023 "compare": true, 00:22:33.023 "compare_and_write": false, 00:22:33.023 "abort": true, 00:22:33.023 "seek_hole": false, 00:22:33.023 "seek_data": false, 00:22:33.023 
"copy": true, 00:22:33.023 "nvme_iov_md": false 00:22:33.023 }, 00:22:33.023 "driver_specific": { 00:22:33.023 "nvme": [ 00:22:33.023 { 00:22:33.023 "pci_address": "0000:00:11.0", 00:22:33.023 "trid": { 00:22:33.023 "trtype": "PCIe", 00:22:33.023 "traddr": "0000:00:11.0" 00:22:33.023 }, 00:22:33.023 "ctrlr_data": { 00:22:33.023 "cntlid": 0, 00:22:33.023 "vendor_id": "0x1b36", 00:22:33.023 "model_number": "QEMU NVMe Ctrl", 00:22:33.023 "serial_number": "12341", 00:22:33.023 "firmware_revision": "8.0.0", 00:22:33.023 "subnqn": "nqn.2019-08.org.qemu:12341", 00:22:33.023 "oacs": { 00:22:33.023 "security": 0, 00:22:33.023 "format": 1, 00:22:33.023 "firmware": 0, 00:22:33.023 "ns_manage": 1 00:22:33.023 }, 00:22:33.023 "multi_ctrlr": false, 00:22:33.023 "ana_reporting": false 00:22:33.023 }, 00:22:33.023 "vs": { 00:22:33.023 "nvme_version": "1.4" 00:22:33.023 }, 00:22:33.023 "ns_data": { 00:22:33.023 "id": 1, 00:22:33.023 "can_share": false 00:22:33.023 } 00:22:33.023 } 00:22:33.023 ], 00:22:33.023 "mp_policy": "active_passive" 00:22:33.023 } 00:22:33.023 } 00:22:33.023 ]' 00:22:33.023 06:56:26 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:22:33.023 06:56:26 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:22:33.023 06:56:26 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:22:33.285 06:56:26 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # nb=1310720 00:22:33.285 06:56:26 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:22:33.285 06:56:26 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1392 -- # echo 5120 00:22:33.285 06:56:26 ftl.ftl_dirty_shutdown -- ftl/common.sh@63 -- # base_size=5120 00:22:33.285 06:56:26 ftl.ftl_dirty_shutdown -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:22:33.285 06:56:26 ftl.ftl_dirty_shutdown -- ftl/common.sh@67 -- # clear_lvols 00:22:33.285 06:56:26 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:22:33.285 06:56:26 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:22:33.285 06:56:26 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # stores=aadf3dae-766f-40c8-978b-74407abed8dd 00:22:33.285 06:56:26 ftl.ftl_dirty_shutdown -- ftl/common.sh@29 -- # for lvs in $stores 00:22:33.285 06:56:26 ftl.ftl_dirty_shutdown -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u aadf3dae-766f-40c8-978b-74407abed8dd 00:22:33.546 06:56:26 ftl.ftl_dirty_shutdown -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:22:33.806 06:56:26 ftl.ftl_dirty_shutdown -- ftl/common.sh@68 -- # lvs=100ec8c3-0934-4ef7-bc5a-07ddc3d4c1d2 00:22:33.806 06:56:26 ftl.ftl_dirty_shutdown -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 100ec8c3-0934-4ef7-bc5a-07ddc3d4c1d2 00:22:34.065 06:56:26 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@49 -- # split_bdev=b23076ac-a86a-4a0a-a8f9-fcc5b7d6b385 00:22:34.065 06:56:26 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@51 -- # '[' -n 0000:00:10.0 ']' 00:22:34.065 06:56:26 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@52 -- # create_nv_cache_bdev nvc0 0000:00:10.0 b23076ac-a86a-4a0a-a8f9-fcc5b7d6b385 00:22:34.065 06:56:26 ftl.ftl_dirty_shutdown -- ftl/common.sh@35 -- # local name=nvc0 00:22:34.065 06:56:26 ftl.ftl_dirty_shutdown -- ftl/common.sh@36 -- # local 
cache_bdf=0000:00:10.0 00:22:34.065 06:56:26 ftl.ftl_dirty_shutdown -- ftl/common.sh@37 -- # local base_bdev=b23076ac-a86a-4a0a-a8f9-fcc5b7d6b385 00:22:34.065 06:56:26 ftl.ftl_dirty_shutdown -- ftl/common.sh@38 -- # local cache_size= 00:22:34.065 06:56:26 ftl.ftl_dirty_shutdown -- ftl/common.sh@41 -- # get_bdev_size b23076ac-a86a-4a0a-a8f9-fcc5b7d6b385 00:22:34.065 06:56:26 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=b23076ac-a86a-4a0a-a8f9-fcc5b7d6b385 00:22:34.065 06:56:26 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:22:34.065 06:56:26 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:22:34.065 06:56:26 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:22:34.065 06:56:26 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b b23076ac-a86a-4a0a-a8f9-fcc5b7d6b385 00:22:34.326 06:56:27 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:22:34.326 { 00:22:34.326 "name": "b23076ac-a86a-4a0a-a8f9-fcc5b7d6b385", 00:22:34.326 "aliases": [ 00:22:34.326 "lvs/nvme0n1p0" 00:22:34.326 ], 00:22:34.326 "product_name": "Logical Volume", 00:22:34.326 "block_size": 4096, 00:22:34.326 "num_blocks": 26476544, 00:22:34.326 "uuid": "b23076ac-a86a-4a0a-a8f9-fcc5b7d6b385", 00:22:34.326 "assigned_rate_limits": { 00:22:34.326 "rw_ios_per_sec": 0, 00:22:34.326 "rw_mbytes_per_sec": 0, 00:22:34.326 "r_mbytes_per_sec": 0, 00:22:34.326 "w_mbytes_per_sec": 0 00:22:34.326 }, 00:22:34.326 "claimed": false, 00:22:34.326 "zoned": false, 00:22:34.326 "supported_io_types": { 00:22:34.326 "read": true, 00:22:34.326 "write": true, 00:22:34.326 "unmap": true, 00:22:34.326 "flush": false, 00:22:34.326 "reset": true, 00:22:34.326 "nvme_admin": false, 00:22:34.326 "nvme_io": false, 00:22:34.326 "nvme_io_md": false, 00:22:34.326 "write_zeroes": true, 00:22:34.326 "zcopy": false, 00:22:34.326 "get_zone_info": false, 00:22:34.326 "zone_management": false, 00:22:34.326 "zone_append": false, 00:22:34.327 "compare": false, 00:22:34.327 "compare_and_write": false, 00:22:34.327 "abort": false, 00:22:34.327 "seek_hole": true, 00:22:34.327 "seek_data": true, 00:22:34.327 "copy": false, 00:22:34.327 "nvme_iov_md": false 00:22:34.327 }, 00:22:34.327 "driver_specific": { 00:22:34.327 "lvol": { 00:22:34.327 "lvol_store_uuid": "100ec8c3-0934-4ef7-bc5a-07ddc3d4c1d2", 00:22:34.327 "base_bdev": "nvme0n1", 00:22:34.327 "thin_provision": true, 00:22:34.327 "num_allocated_clusters": 0, 00:22:34.327 "snapshot": false, 00:22:34.327 "clone": false, 00:22:34.327 "esnap_clone": false 00:22:34.327 } 00:22:34.327 } 00:22:34.327 } 00:22:34.327 ]' 00:22:34.327 06:56:27 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:22:34.327 06:56:27 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:22:34.327 06:56:27 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:22:34.327 06:56:27 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # nb=26476544 00:22:34.327 06:56:27 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:22:34.327 06:56:27 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1392 -- # echo 103424 00:22:34.327 06:56:27 ftl.ftl_dirty_shutdown -- ftl/common.sh@41 -- # local base_size=5171 00:22:34.327 06:56:27 ftl.ftl_dirty_shutdown -- ftl/common.sh@44 -- # local nvc_bdev 00:22:34.327 06:56:27 ftl.ftl_dirty_shutdown -- 
ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:22:34.587 06:56:27 ftl.ftl_dirty_shutdown -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:22:34.587 06:56:27 ftl.ftl_dirty_shutdown -- ftl/common.sh@47 -- # [[ -z '' ]] 00:22:34.587 06:56:27 ftl.ftl_dirty_shutdown -- ftl/common.sh@48 -- # get_bdev_size b23076ac-a86a-4a0a-a8f9-fcc5b7d6b385 00:22:34.587 06:56:27 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=b23076ac-a86a-4a0a-a8f9-fcc5b7d6b385 00:22:34.587 06:56:27 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:22:34.587 06:56:27 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:22:34.587 06:56:27 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:22:34.587 06:56:27 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b b23076ac-a86a-4a0a-a8f9-fcc5b7d6b385 00:22:34.848 06:56:27 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:22:34.848 { 00:22:34.848 "name": "b23076ac-a86a-4a0a-a8f9-fcc5b7d6b385", 00:22:34.848 "aliases": [ 00:22:34.848 "lvs/nvme0n1p0" 00:22:34.848 ], 00:22:34.848 "product_name": "Logical Volume", 00:22:34.848 "block_size": 4096, 00:22:34.848 "num_blocks": 26476544, 00:22:34.848 "uuid": "b23076ac-a86a-4a0a-a8f9-fcc5b7d6b385", 00:22:34.848 "assigned_rate_limits": { 00:22:34.848 "rw_ios_per_sec": 0, 00:22:34.848 "rw_mbytes_per_sec": 0, 00:22:34.848 "r_mbytes_per_sec": 0, 00:22:34.848 "w_mbytes_per_sec": 0 00:22:34.849 }, 00:22:34.849 "claimed": false, 00:22:34.849 "zoned": false, 00:22:34.849 "supported_io_types": { 00:22:34.849 "read": true, 00:22:34.849 "write": true, 00:22:34.849 "unmap": true, 00:22:34.849 "flush": false, 00:22:34.849 "reset": true, 00:22:34.849 "nvme_admin": false, 00:22:34.849 "nvme_io": false, 00:22:34.849 "nvme_io_md": false, 00:22:34.849 "write_zeroes": true, 00:22:34.849 "zcopy": false, 00:22:34.849 "get_zone_info": false, 00:22:34.849 "zone_management": false, 00:22:34.849 "zone_append": false, 00:22:34.849 "compare": false, 00:22:34.849 "compare_and_write": false, 00:22:34.849 "abort": false, 00:22:34.849 "seek_hole": true, 00:22:34.849 "seek_data": true, 00:22:34.849 "copy": false, 00:22:34.849 "nvme_iov_md": false 00:22:34.849 }, 00:22:34.849 "driver_specific": { 00:22:34.849 "lvol": { 00:22:34.849 "lvol_store_uuid": "100ec8c3-0934-4ef7-bc5a-07ddc3d4c1d2", 00:22:34.849 "base_bdev": "nvme0n1", 00:22:34.849 "thin_provision": true, 00:22:34.849 "num_allocated_clusters": 0, 00:22:34.849 "snapshot": false, 00:22:34.849 "clone": false, 00:22:34.849 "esnap_clone": false 00:22:34.849 } 00:22:34.849 } 00:22:34.849 } 00:22:34.849 ]' 00:22:34.849 06:56:27 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:22:34.849 06:56:27 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:22:34.849 06:56:27 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:22:34.849 06:56:27 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # nb=26476544 00:22:34.849 06:56:27 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:22:34.849 06:56:27 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1392 -- # echo 103424 00:22:34.849 06:56:27 ftl.ftl_dirty_shutdown -- ftl/common.sh@48 -- # cache_size=5171 00:22:34.849 06:56:27 ftl.ftl_dirty_shutdown -- ftl/common.sh@50 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:22:35.110 06:56:28 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@52 -- # nvc_bdev=nvc0n1p0 00:22:35.110 06:56:28 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@55 -- # get_bdev_size b23076ac-a86a-4a0a-a8f9-fcc5b7d6b385 00:22:35.110 06:56:28 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=b23076ac-a86a-4a0a-a8f9-fcc5b7d6b385 00:22:35.110 06:56:28 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:22:35.110 06:56:28 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:22:35.110 06:56:28 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:22:35.110 06:56:28 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b b23076ac-a86a-4a0a-a8f9-fcc5b7d6b385 00:22:35.371 06:56:28 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:22:35.371 { 00:22:35.371 "name": "b23076ac-a86a-4a0a-a8f9-fcc5b7d6b385", 00:22:35.371 "aliases": [ 00:22:35.371 "lvs/nvme0n1p0" 00:22:35.371 ], 00:22:35.371 "product_name": "Logical Volume", 00:22:35.371 "block_size": 4096, 00:22:35.371 "num_blocks": 26476544, 00:22:35.371 "uuid": "b23076ac-a86a-4a0a-a8f9-fcc5b7d6b385", 00:22:35.371 "assigned_rate_limits": { 00:22:35.371 "rw_ios_per_sec": 0, 00:22:35.371 "rw_mbytes_per_sec": 0, 00:22:35.371 "r_mbytes_per_sec": 0, 00:22:35.371 "w_mbytes_per_sec": 0 00:22:35.371 }, 00:22:35.371 "claimed": false, 00:22:35.371 "zoned": false, 00:22:35.371 "supported_io_types": { 00:22:35.371 "read": true, 00:22:35.371 "write": true, 00:22:35.371 "unmap": true, 00:22:35.371 "flush": false, 00:22:35.371 "reset": true, 00:22:35.371 "nvme_admin": false, 00:22:35.371 "nvme_io": false, 00:22:35.371 "nvme_io_md": false, 00:22:35.371 "write_zeroes": true, 00:22:35.371 "zcopy": false, 00:22:35.371 "get_zone_info": false, 00:22:35.371 "zone_management": false, 00:22:35.371 "zone_append": false, 00:22:35.371 "compare": false, 00:22:35.371 "compare_and_write": false, 00:22:35.371 "abort": false, 00:22:35.371 "seek_hole": true, 00:22:35.371 "seek_data": true, 00:22:35.371 "copy": false, 00:22:35.371 "nvme_iov_md": false 00:22:35.371 }, 00:22:35.371 "driver_specific": { 00:22:35.371 "lvol": { 00:22:35.371 "lvol_store_uuid": "100ec8c3-0934-4ef7-bc5a-07ddc3d4c1d2", 00:22:35.371 "base_bdev": "nvme0n1", 00:22:35.371 "thin_provision": true, 00:22:35.371 "num_allocated_clusters": 0, 00:22:35.371 "snapshot": false, 00:22:35.371 "clone": false, 00:22:35.371 "esnap_clone": false 00:22:35.371 } 00:22:35.371 } 00:22:35.371 } 00:22:35.371 ]' 00:22:35.371 06:56:28 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:22:35.371 06:56:28 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:22:35.371 06:56:28 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:22:35.371 06:56:28 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # nb=26476544 00:22:35.371 06:56:28 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:22:35.371 06:56:28 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1392 -- # echo 103424 00:22:35.372 06:56:28 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@55 -- # l2p_dram_size_mb=10 00:22:35.372 06:56:28 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@56 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d b23076ac-a86a-4a0a-a8f9-fcc5b7d6b385 
--l2p_dram_limit 10' 00:22:35.372 06:56:28 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@58 -- # '[' -n '' ']' 00:22:35.372 06:56:28 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@59 -- # '[' -n 0000:00:10.0 ']' 00:22:35.372 06:56:28 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@59 -- # ftl_construct_args+=' -c nvc0n1p0' 00:22:35.372 06:56:28 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d b23076ac-a86a-4a0a-a8f9-fcc5b7d6b385 --l2p_dram_limit 10 -c nvc0n1p0 00:22:35.634 [2024-11-18 06:56:28.488550] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:35.634 [2024-11-18 06:56:28.488592] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:22:35.634 [2024-11-18 06:56:28.488603] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:22:35.634 [2024-11-18 06:56:28.488611] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:35.634 [2024-11-18 06:56:28.488652] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:35.634 [2024-11-18 06:56:28.488661] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:22:35.634 [2024-11-18 06:56:28.488669] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:22:35.634 [2024-11-18 06:56:28.488678] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:35.634 [2024-11-18 06:56:28.488695] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:22:35.634 [2024-11-18 06:56:28.488894] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:22:35.634 [2024-11-18 06:56:28.488907] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:35.634 [2024-11-18 06:56:28.488915] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:22:35.634 [2024-11-18 06:56:28.488921] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.219 ms 00:22:35.634 [2024-11-18 06:56:28.488928] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:35.634 [2024-11-18 06:56:28.488992] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 77747828-95d0-4e37-a793-0f30a108e4e5 00:22:35.634 [2024-11-18 06:56:28.489943] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:35.634 [2024-11-18 06:56:28.489966] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:22:35.634 [2024-11-18 06:56:28.489987] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.023 ms 00:22:35.634 [2024-11-18 06:56:28.489993] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:35.634 [2024-11-18 06:56:28.494684] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:35.634 [2024-11-18 06:56:28.494716] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:22:35.634 [2024-11-18 06:56:28.494726] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.657 ms 00:22:35.634 [2024-11-18 06:56:28.494733] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:35.634 [2024-11-18 06:56:28.494791] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:35.634 [2024-11-18 06:56:28.494800] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:22:35.634 [2024-11-18 06:56:28.494811] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.041 ms 00:22:35.634 [2024-11-18 06:56:28.494819] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:35.634 [2024-11-18 06:56:28.494863] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:35.634 [2024-11-18 06:56:28.494870] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:22:35.634 [2024-11-18 06:56:28.494877] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:22:35.634 [2024-11-18 06:56:28.494883] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:35.634 [2024-11-18 06:56:28.494903] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:22:35.634 [2024-11-18 06:56:28.496176] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:35.634 [2024-11-18 06:56:28.496282] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:22:35.634 [2024-11-18 06:56:28.496294] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.279 ms 00:22:35.634 [2024-11-18 06:56:28.496302] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:35.634 [2024-11-18 06:56:28.496331] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:35.634 [2024-11-18 06:56:28.496341] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:22:35.634 [2024-11-18 06:56:28.496347] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:22:35.634 [2024-11-18 06:56:28.496356] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:35.634 [2024-11-18 06:56:28.496374] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:22:35.634 [2024-11-18 06:56:28.496483] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:22:35.634 [2024-11-18 06:56:28.496492] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:22:35.634 [2024-11-18 06:56:28.496506] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:22:35.634 [2024-11-18 06:56:28.496516] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:22:35.634 [2024-11-18 06:56:28.496528] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:22:35.634 [2024-11-18 06:56:28.496534] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:22:35.635 [2024-11-18 06:56:28.496543] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:22:35.635 [2024-11-18 06:56:28.496548] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:22:35.635 [2024-11-18 06:56:28.496555] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:22:35.635 [2024-11-18 06:56:28.496561] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:35.635 [2024-11-18 06:56:28.496568] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:22:35.635 [2024-11-18 06:56:28.496574] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.187 ms 00:22:35.635 [2024-11-18 06:56:28.496581] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:35.635 [2024-11-18 06:56:28.496646] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:35.635 [2024-11-18 06:56:28.496656] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:22:35.635 [2024-11-18 06:56:28.496661] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms 00:22:35.635 [2024-11-18 06:56:28.496668] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:35.635 [2024-11-18 06:56:28.496744] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:22:35.635 [2024-11-18 06:56:28.496755] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:22:35.635 [2024-11-18 06:56:28.496761] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:22:35.635 [2024-11-18 06:56:28.496768] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:35.635 [2024-11-18 06:56:28.496773] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:22:35.635 [2024-11-18 06:56:28.496781] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:22:35.635 [2024-11-18 06:56:28.496786] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:22:35.635 [2024-11-18 06:56:28.496793] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:22:35.635 [2024-11-18 06:56:28.496798] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:22:35.635 [2024-11-18 06:56:28.496804] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:22:35.635 [2024-11-18 06:56:28.496809] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:22:35.635 [2024-11-18 06:56:28.496816] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:22:35.635 [2024-11-18 06:56:28.496821] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:22:35.635 [2024-11-18 06:56:28.496829] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:22:35.635 [2024-11-18 06:56:28.496835] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:22:35.635 [2024-11-18 06:56:28.496841] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:35.635 [2024-11-18 06:56:28.496846] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:22:35.635 [2024-11-18 06:56:28.496852] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:22:35.635 [2024-11-18 06:56:28.496858] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:35.635 [2024-11-18 06:56:28.496864] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:22:35.635 [2024-11-18 06:56:28.496871] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:22:35.635 [2024-11-18 06:56:28.496879] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:35.635 [2024-11-18 06:56:28.496885] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:22:35.635 [2024-11-18 06:56:28.496892] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:22:35.635 [2024-11-18 06:56:28.496897] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:35.635 [2024-11-18 06:56:28.496905] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:22:35.635 [2024-11-18 06:56:28.496911] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:22:35.635 [2024-11-18 06:56:28.496918] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:35.635 [2024-11-18 06:56:28.496924] 
ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:22:35.635 [2024-11-18 06:56:28.496932] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:22:35.635 [2024-11-18 06:56:28.496938] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:35.635 [2024-11-18 06:56:28.496946] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:22:35.635 [2024-11-18 06:56:28.496952] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:22:35.635 [2024-11-18 06:56:28.496959] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:22:35.635 [2024-11-18 06:56:28.496965] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:22:35.635 [2024-11-18 06:56:28.496972] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:22:35.635 [2024-11-18 06:56:28.496988] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:22:35.635 [2024-11-18 06:56:28.496995] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:22:35.635 [2024-11-18 06:56:28.497001] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:22:35.635 [2024-11-18 06:56:28.497008] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:35.635 [2024-11-18 06:56:28.497014] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:22:35.635 [2024-11-18 06:56:28.497021] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:22:35.635 [2024-11-18 06:56:28.497027] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:35.635 [2024-11-18 06:56:28.497034] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:22:35.635 [2024-11-18 06:56:28.497040] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:22:35.635 [2024-11-18 06:56:28.497049] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:22:35.635 [2024-11-18 06:56:28.497055] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:35.635 [2024-11-18 06:56:28.497064] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:22:35.635 [2024-11-18 06:56:28.497070] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:22:35.635 [2024-11-18 06:56:28.497077] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:22:35.635 [2024-11-18 06:56:28.497083] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:22:35.635 [2024-11-18 06:56:28.497090] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:22:35.635 [2024-11-18 06:56:28.497097] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:22:35.635 [2024-11-18 06:56:28.497106] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:22:35.635 [2024-11-18 06:56:28.497116] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:35.635 [2024-11-18 06:56:28.497126] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:22:35.635 [2024-11-18 06:56:28.497132] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:22:35.635 [2024-11-18 06:56:28.497140] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: 
*NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:22:35.635 [2024-11-18 06:56:28.497146] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:22:35.635 [2024-11-18 06:56:28.497153] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:22:35.635 [2024-11-18 06:56:28.497159] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:22:35.635 [2024-11-18 06:56:28.497168] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:22:35.635 [2024-11-18 06:56:28.497174] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:22:35.635 [2024-11-18 06:56:28.497181] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:22:35.635 [2024-11-18 06:56:28.497188] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:22:35.635 [2024-11-18 06:56:28.497195] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:22:35.635 [2024-11-18 06:56:28.497201] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:22:35.635 [2024-11-18 06:56:28.497208] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:22:35.635 [2024-11-18 06:56:28.497215] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:22:35.635 [2024-11-18 06:56:28.497223] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:22:35.635 [2024-11-18 06:56:28.497232] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:35.635 [2024-11-18 06:56:28.497240] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:22:35.635 [2024-11-18 06:56:28.497246] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:22:35.635 [2024-11-18 06:56:28.497253] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:22:35.635 [2024-11-18 06:56:28.497259] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:22:35.635 [2024-11-18 06:56:28.497266] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:35.635 [2024-11-18 06:56:28.497271] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:22:35.635 [2024-11-18 06:56:28.497279] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.572 ms 00:22:35.635 [2024-11-18 06:56:28.497284] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:35.635 [2024-11-18 06:56:28.497314] mngt/ftl_mngt_misc.c: 
165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 00:22:35.635 [2024-11-18 06:56:28.497321] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:22:39.844 [2024-11-18 06:56:32.378971] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:39.844 [2024-11-18 06:56:32.379348] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:22:39.844 [2024-11-18 06:56:32.379634] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3881.631 ms 00:22:39.844 [2024-11-18 06:56:32.379665] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:39.844 [2024-11-18 06:56:32.394450] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:39.844 [2024-11-18 06:56:32.394671] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:22:39.844 [2024-11-18 06:56:32.394912] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.643 ms 00:22:39.844 [2024-11-18 06:56:32.394940] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:39.844 [2024-11-18 06:56:32.395103] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:39.844 [2024-11-18 06:56:32.395293] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:22:39.844 [2024-11-18 06:56:32.395329] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.065 ms 00:22:39.844 [2024-11-18 06:56:32.395352] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:39.844 [2024-11-18 06:56:32.408331] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:39.844 [2024-11-18 06:56:32.408517] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:22:39.844 [2024-11-18 06:56:32.408595] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.898 ms 00:22:39.844 [2024-11-18 06:56:32.408619] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:39.844 [2024-11-18 06:56:32.408675] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:39.844 [2024-11-18 06:56:32.408700] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:22:39.844 [2024-11-18 06:56:32.408723] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:22:39.844 [2024-11-18 06:56:32.408742] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:39.844 [2024-11-18 06:56:32.409329] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:39.844 [2024-11-18 06:56:32.409525] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:22:39.844 [2024-11-18 06:56:32.409596] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.514 ms 00:22:39.845 [2024-11-18 06:56:32.409620] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:39.845 [2024-11-18 06:56:32.409750] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:39.845 [2024-11-18 06:56:32.409783] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:22:39.845 [2024-11-18 06:56:32.409852] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.087 ms 00:22:39.845 [2024-11-18 06:56:32.409876] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:39.845 [2024-11-18 06:56:32.418483] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:39.845 [2024-11-18 06:56:32.418650] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:22:39.845 [2024-11-18 06:56:32.418736] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.559 ms 00:22:39.845 [2024-11-18 06:56:32.418763] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:39.845 [2024-11-18 06:56:32.428627] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:22:39.845 [2024-11-18 06:56:32.432659] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:39.845 [2024-11-18 06:56:32.432821] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:22:39.845 [2024-11-18 06:56:32.432876] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.758 ms 00:22:39.845 [2024-11-18 06:56:32.432902] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:39.845 [2024-11-18 06:56:32.524438] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:39.845 [2024-11-18 06:56:32.524669] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:22:39.845 [2024-11-18 06:56:32.524755] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 91.489 ms 00:22:39.845 [2024-11-18 06:56:32.524787] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:39.845 [2024-11-18 06:56:32.525083] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:39.845 [2024-11-18 06:56:32.525125] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:22:39.845 [2024-11-18 06:56:32.525204] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.179 ms 00:22:39.845 [2024-11-18 06:56:32.525230] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:39.845 [2024-11-18 06:56:32.531321] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:39.845 [2024-11-18 06:56:32.531507] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:22:39.845 [2024-11-18 06:56:32.531570] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.018 ms 00:22:39.845 [2024-11-18 06:56:32.531601] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:39.845 [2024-11-18 06:56:32.536789] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:39.845 [2024-11-18 06:56:32.536960] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:22:39.845 [2024-11-18 06:56:32.537143] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.133 ms 00:22:39.845 [2024-11-18 06:56:32.537172] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:39.845 [2024-11-18 06:56:32.537521] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:39.845 [2024-11-18 06:56:32.537858] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:22:39.845 [2024-11-18 06:56:32.537902] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.295 ms 00:22:39.845 [2024-11-18 06:56:32.537928] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:39.845 [2024-11-18 06:56:32.579853] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:39.845 [2024-11-18 06:56:32.580050] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:22:39.845 [2024-11-18 06:56:32.580234] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 41.880 ms 00:22:39.845 [2024-11-18 06:56:32.580265] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:39.845 [2024-11-18 06:56:32.587459] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:39.845 [2024-11-18 06:56:32.587520] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:22:39.845 [2024-11-18 06:56:32.587533] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.108 ms 00:22:39.845 [2024-11-18 06:56:32.587545] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:39.845 [2024-11-18 06:56:32.593337] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:39.845 [2024-11-18 06:56:32.593389] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:22:39.845 [2024-11-18 06:56:32.593401] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.742 ms 00:22:39.845 [2024-11-18 06:56:32.593411] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:39.845 [2024-11-18 06:56:32.599428] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:39.845 [2024-11-18 06:56:32.599482] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:22:39.845 [2024-11-18 06:56:32.599493] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.970 ms 00:22:39.845 [2024-11-18 06:56:32.599507] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:39.845 [2024-11-18 06:56:32.599558] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:39.845 [2024-11-18 06:56:32.599571] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:22:39.845 [2024-11-18 06:56:32.599581] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:22:39.845 [2024-11-18 06:56:32.599599] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:39.845 [2024-11-18 06:56:32.599673] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:39.845 [2024-11-18 06:56:32.599688] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:22:39.845 [2024-11-18 06:56:32.599697] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:22:39.845 [2024-11-18 06:56:32.599712] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:39.845 [2024-11-18 06:56:32.600800] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 4111.791 ms, result 0 00:22:39.845 { 00:22:39.845 "name": "ftl0", 00:22:39.845 "uuid": "77747828-95d0-4e37-a793-0f30a108e4e5" 00:22:39.845 } 00:22:39.845 06:56:32 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@64 -- # echo '{"subsystems": [' 00:22:39.845 06:56:32 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:22:39.845 06:56:32 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@66 -- # echo ']}' 00:22:39.845 06:56:32 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@70 -- # modprobe nbd 00:22:39.845 06:56:32 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nbd_start_disk ftl0 /dev/nbd0 00:22:40.106 /dev/nbd0 00:22:40.106 06:56:33 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@72 -- # waitfornbd nbd0 00:22:40.106 06:56:33 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:22:40.106 06:56:33 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@873 -- # local i 00:22:40.106 06:56:33 ftl.ftl_dirty_shutdown -- 
common/autotest_common.sh@875 -- # (( i = 1 )) 00:22:40.106 06:56:33 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:22:40.106 06:56:33 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:22:40.106 06:56:33 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@877 -- # break 00:22:40.106 06:56:33 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:22:40.106 06:56:33 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:22:40.106 06:56:33 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/ftl/nbdtest bs=4096 count=1 iflag=direct 00:22:40.106 1+0 records in 00:22:40.106 1+0 records out 00:22:40.106 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000472041 s, 8.7 MB/s 00:22:40.106 06:56:33 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/ftl/nbdtest 00:22:40.106 06:56:33 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@890 -- # size=4096 00:22:40.106 06:56:33 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/nbdtest 00:22:40.106 06:56:33 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:22:40.106 06:56:33 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@893 -- # return 0 00:22:40.106 06:56:33 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@75 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd -m 0x2 --if=/dev/urandom --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --bs=4096 --count=262144 00:22:40.106 [2024-11-18 06:56:33.170754] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 00:22:40.106 [2024-11-18 06:56:33.170896] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89024 ] 00:22:40.368 [2024-11-18 06:56:33.330257] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:40.368 [2024-11-18 06:56:33.358499] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:41.757  [2024-11-18T06:56:35.788Z] Copying: 190/1024 [MB] (190 MBps) [2024-11-18T06:56:36.730Z] Copying: 381/1024 [MB] (190 MBps) [2024-11-18T06:56:37.673Z] Copying: 580/1024 [MB] (199 MBps) [2024-11-18T06:56:38.246Z] Copying: 836/1024 [MB] (255 MBps) [2024-11-18T06:56:38.507Z] Copying: 1024/1024 [MB] (average 216 MBps) 00:22:45.420 00:22:45.420 06:56:38 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@76 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:22:47.362 06:56:40 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@77 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd -m 0x2 --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --of=/dev/nbd0 --bs=4096 --count=262144 --oflag=direct 00:22:47.362 [2024-11-18 06:56:40.428341] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 
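The first spdk_dd pass above fills the test file with one gibibyte of random data: 262144 blocks × 4096 B = 1,073,741,824 B = 1024 MiB, which at the averaged 216 MBps comes to roughly 4.7 s of copy time, consistent with the five progress samples (190-255 MBps). The follow-on pass below, which replays this file through /dev/nbd0 onto the FTL bdev, averages only 21 MBps, presumably because every write now traverses the nbd export and the FTL write path. A quick sanity check of the figures in plain shell (the plain-dd line is only an illustration; the test itself uses spdk_dd, so its I/O goes through the SPDK bdev layer rather than the kernel):

    # 262144 blocks of 4096 B is exactly 1024 MiB
    echo $(( 262144 * 4096 / 1048576 ))    # -> 1024
    # ~4.7 s of copy time at the reported 216 MBps average
    echo "scale=1; 1024 / 216" | bc        # -> 4.7
    # kernel-side equivalent of the fill step (not part of the test)
    dd if=/dev/urandom of=testfile bs=4096 count=262144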
00:22:47.362 [2024-11-18 06:56:40.428464] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89104 ] 00:22:47.624 [2024-11-18 06:56:40.583759] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:47.624 [2024-11-18 06:56:40.600258] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:48.567  [2024-11-18T06:56:43.041Z] Copying: 15/1024 [MB] (15 MBps) [2024-11-18T06:56:43.988Z] Copying: 31/1024 [MB] (15 MBps) [2024-11-18T06:56:44.931Z] Copying: 49/1024 [MB] (18 MBps) [2024-11-18T06:56:45.873Z] Copying: 72/1024 [MB] (23 MBps) [2024-11-18T06:56:46.815Z] Copying: 88/1024 [MB] (16 MBps) [2024-11-18T06:56:47.758Z] Copying: 106/1024 [MB] (17 MBps) [2024-11-18T06:56:48.702Z] Copying: 124/1024 [MB] (17 MBps) [2024-11-18T06:56:49.648Z] Copying: 149/1024 [MB] (25 MBps) [2024-11-18T06:56:51.033Z] Copying: 170/1024 [MB] (20 MBps) [2024-11-18T06:56:51.977Z] Copying: 191/1024 [MB] (20 MBps) [2024-11-18T06:56:52.920Z] Copying: 211/1024 [MB] (20 MBps) [2024-11-18T06:56:53.864Z] Copying: 245/1024 [MB] (34 MBps) [2024-11-18T06:56:54.806Z] Copying: 269/1024 [MB] (23 MBps) [2024-11-18T06:56:55.751Z] Copying: 292/1024 [MB] (23 MBps) [2024-11-18T06:56:56.695Z] Copying: 315/1024 [MB] (22 MBps) [2024-11-18T06:56:58.081Z] Copying: 334/1024 [MB] (19 MBps) [2024-11-18T06:56:58.653Z] Copying: 351/1024 [MB] (16 MBps) [2024-11-18T06:57:00.040Z] Copying: 376/1024 [MB] (25 MBps) [2024-11-18T06:57:00.984Z] Copying: 405/1024 [MB] (29 MBps) [2024-11-18T06:57:01.926Z] Copying: 422/1024 [MB] (16 MBps) [2024-11-18T06:57:02.873Z] Copying: 440/1024 [MB] (18 MBps) [2024-11-18T06:57:03.816Z] Copying: 456/1024 [MB] (15 MBps) [2024-11-18T06:57:04.758Z] Copying: 469/1024 [MB] (13 MBps) [2024-11-18T06:57:05.703Z] Copying: 484/1024 [MB] (14 MBps) [2024-11-18T06:57:07.089Z] Copying: 495/1024 [MB] (11 MBps) [2024-11-18T06:57:07.663Z] Copying: 506/1024 [MB] (11 MBps) [2024-11-18T06:57:09.109Z] Copying: 518/1024 [MB] (12 MBps) [2024-11-18T06:57:09.714Z] Copying: 530/1024 [MB] (12 MBps) [2024-11-18T06:57:10.657Z] Copying: 541/1024 [MB] (10 MBps) [2024-11-18T06:57:12.031Z] Copying: 556/1024 [MB] (15 MBps) [2024-11-18T06:57:12.965Z] Copying: 584/1024 [MB] (27 MBps) [2024-11-18T06:57:13.899Z] Copying: 611/1024 [MB] (26 MBps) [2024-11-18T06:57:14.832Z] Copying: 634/1024 [MB] (23 MBps) [2024-11-18T06:57:15.765Z] Copying: 656/1024 [MB] (21 MBps) [2024-11-18T06:57:16.699Z] Copying: 683/1024 [MB] (27 MBps) [2024-11-18T06:57:18.071Z] Copying: 706/1024 [MB] (22 MBps) [2024-11-18T06:57:19.004Z] Copying: 730/1024 [MB] (24 MBps) [2024-11-18T06:57:19.935Z] Copying: 757/1024 [MB] (26 MBps) [2024-11-18T06:57:20.870Z] Copying: 784/1024 [MB] (27 MBps) [2024-11-18T06:57:21.812Z] Copying: 809/1024 [MB] (25 MBps) [2024-11-18T06:57:22.755Z] Copying: 831/1024 [MB] (21 MBps) [2024-11-18T06:57:23.697Z] Copying: 853/1024 [MB] (22 MBps) [2024-11-18T06:57:25.080Z] Copying: 881/1024 [MB] (27 MBps) [2024-11-18T06:57:25.648Z] Copying: 911/1024 [MB] (30 MBps) [2024-11-18T06:57:27.035Z] Copying: 946/1024 [MB] (35 MBps) [2024-11-18T06:57:27.978Z] Copying: 970/1024 [MB] (23 MBps) [2024-11-18T06:57:28.923Z] Copying: 989/1024 [MB] (19 MBps) [2024-11-18T06:57:28.923Z] Copying: 1018/1024 [MB] (28 MBps) [2024-11-18T06:57:29.184Z] Copying: 1024/1024 [MB] (average 21 MBps) 00:23:36.097 00:23:36.097 06:57:28 ftl.ftl_dirty_shutdown -- 
ftl/dirty_shutdown.sh@78 -- # sync /dev/nbd0 00:23:36.097 06:57:28 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nbd_stop_disk /dev/nbd0 00:23:36.097 06:57:29 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:23:36.360 [2024-11-18 06:57:29.329096] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:36.360 [2024-11-18 06:57:29.329163] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:23:36.360 [2024-11-18 06:57:29.329184] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:23:36.360 [2024-11-18 06:57:29.329194] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.360 [2024-11-18 06:57:29.329222] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:23:36.360 [2024-11-18 06:57:29.330176] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:36.360 [2024-11-18 06:57:29.330217] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:23:36.360 [2024-11-18 06:57:29.330229] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.935 ms 00:23:36.360 [2024-11-18 06:57:29.330240] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.360 [2024-11-18 06:57:29.333518] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:36.360 [2024-11-18 06:57:29.333572] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:23:36.360 [2024-11-18 06:57:29.333585] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.248 ms 00:23:36.360 [2024-11-18 06:57:29.333602] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.360 [2024-11-18 06:57:29.353898] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:36.360 [2024-11-18 06:57:29.354113] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:23:36.361 [2024-11-18 06:57:29.354291] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.275 ms 00:23:36.361 [2024-11-18 06:57:29.354311] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.361 [2024-11-18 06:57:29.360519] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:36.361 [2024-11-18 06:57:29.360570] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:23:36.361 [2024-11-18 06:57:29.360582] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.162 ms 00:23:36.361 [2024-11-18 06:57:29.360596] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.361 [2024-11-18 06:57:29.363681] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:36.361 [2024-11-18 06:57:29.363880] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:23:36.361 [2024-11-18 06:57:29.363899] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.998 ms 00:23:36.361 [2024-11-18 06:57:29.363915] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.361 [2024-11-18 06:57:29.371113] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:36.361 [2024-11-18 06:57:29.371180] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:23:36.361 [2024-11-18 06:57:29.371193] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.152 ms 00:23:36.361 
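The teardown under way here mirrors dirty_shutdown.sh@78-80: flush the nbd device, detach it, then unload the FTL bdev. The unload trace persists the L2P, NV cache metadata, valid map, P2L, band and trim metadata, and the superblock before (as the following records show) setting the FTL clean state. In shell form, using only commands that appear in this run:

    # flush outstanding writes on the nbd block device
    sync /dev/nbd0
    # detach the nbd export, then gracefully unload the FTL bdev
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py nbd_stop_disk /dev/nbd0
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0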
[2024-11-18 06:57:29.371206] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.361 [2024-11-18 06:57:29.371365] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:36.361 [2024-11-18 06:57:29.371383] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:23:36.361 [2024-11-18 06:57:29.371395] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.108 ms 00:23:36.361 [2024-11-18 06:57:29.371407] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.361 [2024-11-18 06:57:29.374300] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:36.361 [2024-11-18 06:57:29.374461] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:23:36.361 [2024-11-18 06:57:29.374518] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.871 ms 00:23:36.361 [2024-11-18 06:57:29.374545] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.361 [2024-11-18 06:57:29.377365] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:36.361 [2024-11-18 06:57:29.377533] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:23:36.361 [2024-11-18 06:57:29.377589] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.768 ms 00:23:36.361 [2024-11-18 06:57:29.377615] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.361 [2024-11-18 06:57:29.379670] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:36.361 [2024-11-18 06:57:29.379831] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:23:36.361 [2024-11-18 06:57:29.379896] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.004 ms 00:23:36.361 [2024-11-18 06:57:29.379909] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.361 [2024-11-18 06:57:29.382096] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:36.361 [2024-11-18 06:57:29.382144] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:23:36.361 [2024-11-18 06:57:29.382154] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.086 ms 00:23:36.361 [2024-11-18 06:57:29.382165] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.361 [2024-11-18 06:57:29.382411] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:23:36.361 [2024-11-18 06:57:29.382562] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:23:36.361 [2024-11-18 06:57:29.382607] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:23:36.361 [2024-11-18 06:57:29.382637] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:23:36.361 [2024-11-18 06:57:29.382663] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:23:36.361 [2024-11-18 06:57:29.382776] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:23:36.361 [2024-11-18 06:57:29.382806] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:23:36.361 [2024-11-18 06:57:29.382837] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:23:36.361 [2024-11-18 06:57:29.382864] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:23:36.361 [2024-11-18 06:57:29.382893] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:23:36.361 [2024-11-18 06:57:29.382919] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:23:36.361 [2024-11-18 06:57:29.382948] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:23:36.361 [2024-11-18 06:57:29.383013] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:23:36.361 [2024-11-18 06:57:29.383047] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:23:36.361 [2024-11-18 06:57:29.383073] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:23:36.361 [2024-11-18 06:57:29.383103] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:23:36.361 [2024-11-18 06:57:29.383128] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:23:36.361 [2024-11-18 06:57:29.383152] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:23:36.361 [2024-11-18 06:57:29.383172] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:23:36.361 [2024-11-18 06:57:29.383193] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:23:36.361 [2024-11-18 06:57:29.383213] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:23:36.361 [2024-11-18 06:57:29.383238] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:23:36.361 [2024-11-18 06:57:29.383256] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:23:36.361 [2024-11-18 06:57:29.383279] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:23:36.361 [2024-11-18 06:57:29.383297] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:23:36.361 [2024-11-18 06:57:29.383323] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:23:36.361 [2024-11-18 06:57:29.383342] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:23:36.361 [2024-11-18 06:57:29.383363] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:23:36.361 [2024-11-18 06:57:29.383381] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:23:36.361 [2024-11-18 06:57:29.383403] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:23:36.361 [2024-11-18 06:57:29.383420] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:23:36.361 [2024-11-18 06:57:29.383443] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:23:36.361 [2024-11-18 06:57:29.383460] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:23:36.361 [2024-11-18 06:57:29.383481] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:23:36.361 [2024-11-18 06:57:29.383500] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:23:36.361 [2024-11-18 06:57:29.383520] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:23:36.361 [2024-11-18 06:57:29.383540] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:23:36.361 [2024-11-18 06:57:29.383565] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:23:36.361 [2024-11-18 06:57:29.383584] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:23:36.361 [2024-11-18 06:57:29.383609] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:23:36.361 [2024-11-18 06:57:29.383627] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:23:36.361 [2024-11-18 06:57:29.383651] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:23:36.361 [2024-11-18 06:57:29.383669] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:23:36.361 [2024-11-18 06:57:29.383692] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:23:36.361 [2024-11-18 06:57:29.383724] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:23:36.361 [2024-11-18 06:57:29.383747] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:23:36.361 [2024-11-18 06:57:29.383766] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:23:36.361 [2024-11-18 06:57:29.383789] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:23:36.361 [2024-11-18 06:57:29.383816] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:23:36.361 [2024-11-18 06:57:29.383841] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:23:36.361 [2024-11-18 06:57:29.383860] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:23:36.361 [2024-11-18 06:57:29.383882] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:23:36.361 [2024-11-18 06:57:29.383918] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:23:36.361 [2024-11-18 06:57:29.383949] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:23:36.361 [2024-11-18 06:57:29.383967] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:23:36.361 [2024-11-18 06:57:29.384013] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:23:36.361 [2024-11-18 06:57:29.384032] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:23:36.361 [2024-11-18 06:57:29.384054] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:23:36.361 [2024-11-18 
06:57:29.384073] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:23:36.361 [2024-11-18 06:57:29.384096] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:23:36.361 [2024-11-18 06:57:29.384115] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:23:36.361 [2024-11-18 06:57:29.384139] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:23:36.361 [2024-11-18 06:57:29.384159] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:23:36.361 [2024-11-18 06:57:29.384182] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:23:36.361 [2024-11-18 06:57:29.384211] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:23:36.361 [2024-11-18 06:57:29.384233] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:23:36.361 [2024-11-18 06:57:29.384255] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:23:36.361 [2024-11-18 06:57:29.384280] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:23:36.361 [2024-11-18 06:57:29.384299] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:23:36.361 [2024-11-18 06:57:29.384325] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:23:36.361 [2024-11-18 06:57:29.384344] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:23:36.361 [2024-11-18 06:57:29.384366] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:23:36.361 [2024-11-18 06:57:29.384384] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:23:36.361 [2024-11-18 06:57:29.384406] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:23:36.361 [2024-11-18 06:57:29.384425] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:23:36.361 [2024-11-18 06:57:29.384448] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:23:36.361 [2024-11-18 06:57:29.384467] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:23:36.361 [2024-11-18 06:57:29.384491] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:23:36.361 [2024-11-18 06:57:29.384510] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:23:36.361 [2024-11-18 06:57:29.384532] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:23:36.361 [2024-11-18 06:57:29.384550] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:23:36.361 [2024-11-18 06:57:29.384572] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:23:36.361 [2024-11-18 06:57:29.384589] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 
00:23:36.361 [2024-11-18 06:57:29.384611] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:23:36.361 [2024-11-18 06:57:29.384629] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:23:36.361 [2024-11-18 06:57:29.384654] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:23:36.361 [2024-11-18 06:57:29.384674] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:23:36.361 [2024-11-18 06:57:29.384697] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:23:36.361 [2024-11-18 06:57:29.384716] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:23:36.361 [2024-11-18 06:57:29.384738] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:23:36.361 [2024-11-18 06:57:29.384767] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:23:36.362 [2024-11-18 06:57:29.384788] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:23:36.362 [2024-11-18 06:57:29.384808] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:23:36.362 [2024-11-18 06:57:29.384831] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:23:36.362 [2024-11-18 06:57:29.384850] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:23:36.362 [2024-11-18 06:57:29.384874] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:23:36.362 [2024-11-18 06:57:29.384891] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:23:36.362 [2024-11-18 06:57:29.384913] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:23:36.362 [2024-11-18 06:57:29.384931] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:23:36.362 [2024-11-18 06:57:29.384954] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:23:36.362 [2024-11-18 06:57:29.385002] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:23:36.362 [2024-11-18 06:57:29.385048] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:23:36.362 [2024-11-18 06:57:29.385090] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 77747828-95d0-4e37-a793-0f30a108e4e5 00:23:36.362 [2024-11-18 06:57:29.385113] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:23:36.362 [2024-11-18 06:57:29.385131] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:23:36.362 [2024-11-18 06:57:29.385165] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:23:36.362 [2024-11-18 06:57:29.385184] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:23:36.362 [2024-11-18 06:57:29.385204] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:23:36.362 [2024-11-18 06:57:29.385229] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:23:36.362 [2024-11-18 06:57:29.385249] 
ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:23:36.362 [2024-11-18 06:57:29.385264] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:23:36.362 [2024-11-18 06:57:29.385284] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:23:36.362 [2024-11-18 06:57:29.385303] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:36.362 [2024-11-18 06:57:29.385325] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:23:36.362 [2024-11-18 06:57:29.385349] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.905 ms 00:23:36.362 [2024-11-18 06:57:29.385374] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.362 [2024-11-18 06:57:29.389128] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:36.362 [2024-11-18 06:57:29.389210] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:23:36.362 [2024-11-18 06:57:29.389231] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.700 ms 00:23:36.362 [2024-11-18 06:57:29.389254] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.362 [2024-11-18 06:57:29.389450] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:36.362 [2024-11-18 06:57:29.389477] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:23:36.362 [2024-11-18 06:57:29.389502] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.152 ms 00:23:36.362 [2024-11-18 06:57:29.389524] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.362 [2024-11-18 06:57:29.401517] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:36.362 [2024-11-18 06:57:29.401583] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:23:36.362 [2024-11-18 06:57:29.401596] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:36.362 [2024-11-18 06:57:29.401608] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.362 [2024-11-18 06:57:29.401681] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:36.362 [2024-11-18 06:57:29.401696] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:23:36.362 [2024-11-18 06:57:29.401708] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:36.362 [2024-11-18 06:57:29.401726] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.362 [2024-11-18 06:57:29.401816] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:36.362 [2024-11-18 06:57:29.401835] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:23:36.362 [2024-11-18 06:57:29.401844] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:36.362 [2024-11-18 06:57:29.401855] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.362 [2024-11-18 06:57:29.401874] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:36.362 [2024-11-18 06:57:29.401886] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:23:36.362 [2024-11-18 06:57:29.401896] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:36.362 [2024-11-18 06:57:29.401910] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.362 [2024-11-18 06:57:29.421892] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] 
Rollback 00:23:36.362 [2024-11-18 06:57:29.421969] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:23:36.362 [2024-11-18 06:57:29.422024] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:36.362 [2024-11-18 06:57:29.422036] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.362 [2024-11-18 06:57:29.438147] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:36.362 [2024-11-18 06:57:29.438214] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:23:36.362 [2024-11-18 06:57:29.438227] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:36.362 [2024-11-18 06:57:29.438244] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.362 [2024-11-18 06:57:29.438348] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:36.362 [2024-11-18 06:57:29.438368] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:23:36.362 [2024-11-18 06:57:29.438377] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:36.362 [2024-11-18 06:57:29.438391] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.362 [2024-11-18 06:57:29.438502] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:36.362 [2024-11-18 06:57:29.438519] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:23:36.362 [2024-11-18 06:57:29.438528] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:36.362 [2024-11-18 06:57:29.438540] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.362 [2024-11-18 06:57:29.438634] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:36.362 [2024-11-18 06:57:29.438655] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:23:36.362 [2024-11-18 06:57:29.438664] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:36.362 [2024-11-18 06:57:29.438677] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.362 [2024-11-18 06:57:29.438732] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:36.362 [2024-11-18 06:57:29.438746] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:23:36.362 [2024-11-18 06:57:29.438756] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:36.362 [2024-11-18 06:57:29.438775] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.362 [2024-11-18 06:57:29.438838] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:36.362 [2024-11-18 06:57:29.438857] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:23:36.362 [2024-11-18 06:57:29.438878] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:36.362 [2024-11-18 06:57:29.438890] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.362 [2024-11-18 06:57:29.438953] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:36.362 [2024-11-18 06:57:29.438970] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:23:36.362 [2024-11-18 06:57:29.439006] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:36.362 [2024-11-18 06:57:29.439019] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.362 [2024-11-18 
06:57:29.439205] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 110.062 ms, result 0 00:23:36.624 true 00:23:36.624 06:57:29 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@83 -- # kill -9 88879 00:23:36.624 06:57:29 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@84 -- # rm -f /dev/shm/spdk_tgt_trace.pid88879 00:23:36.624 06:57:29 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/urandom --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --bs=4096 --count=262144 00:23:36.624 [2024-11-18 06:57:29.532553] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 00:23:36.624 [2024-11-18 06:57:29.532709] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89625 ] 00:23:36.624 [2024-11-18 06:57:29.694280] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:36.885 [2024-11-18 06:57:29.720612] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:37.827  [2024-11-18T06:57:31.851Z] Copying: 186/1024 [MB] (186 MBps) [2024-11-18T06:57:33.226Z] Copying: 401/1024 [MB] (215 MBps) [2024-11-18T06:57:34.162Z] Copying: 657/1024 [MB] (255 MBps) [2024-11-18T06:57:34.421Z] Copying: 906/1024 [MB] (249 MBps) [2024-11-18T06:57:34.421Z] Copying: 1024/1024 [MB] (average 229 MBps) 00:23:41.334 00:23:41.594 /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh: line 87: 88879 Killed "$SPDK_BIN_DIR/spdk_tgt" -m 0x1 00:23:41.594 06:57:34 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@88 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --ob=ftl0 --count=262144 --seek=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:23:41.594 [2024-11-18 06:57:34.480147] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 
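The shell steps driving this stretch of the log come from test/ftl/dirty_shutdown.sh (lines 78 through 88, per the script markers quoted above). A minimal sketch of that sequence, reassembled from the log itself; the PID 88879, the device names, and the block counts are values from this particular run, not fixed constants, and the paths are abbreviated to be repo-relative:

    # flush and detach the nbd export of the FTL bdev, then unload it cleanly
    sync /dev/nbd0
    scripts/rpc.py nbd_stop_disk /dev/nbd0
    scripts/rpc.py bdev_ftl_unload -b ftl0
    # simulate a crash: SIGKILL the target and drop its shm trace file
    kill -9 88879
    rm -f /dev/shm/spdk_tgt_trace.pid88879
    # generate reference data (262144 x 4096-byte blocks), then replay it into
    # ftl0 at a 262144-block offset, driving the bdev directly from the saved
    # JSON config rather than through a running target
    build/bin/spdk_dd --if=/dev/urandom --of=test/ftl/testfile2 --bs=4096 --count=262144
    build/bin/spdk_dd --if=test/ftl/testfile2 --ob=ftl0 --count=262144 --seek=262144 \
        --json=test/ftl/config/ftl.json

262144 blocks of 4096 bytes is the 1024 MiB total that the Copying: progress lines above and below count up to.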
00:23:41.594 [2024-11-18 06:57:34.480244] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89681 ] 00:23:41.594 [2024-11-18 06:57:34.627631] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:41.594 [2024-11-18 06:57:34.649252] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:41.853 [2024-11-18 06:57:34.747219] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:23:41.853 [2024-11-18 06:57:34.747433] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:23:41.853 [2024-11-18 06:57:34.809666] blobstore.c:4875:bs_recover: *NOTICE*: Performing recovery on blobstore 00:23:41.853 [2024-11-18 06:57:34.810556] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:23:41.853 [2024-11-18 06:57:34.810970] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:23:42.113 [2024-11-18 06:57:35.133084] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:42.113 [2024-11-18 06:57:35.133220] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:23:42.113 [2024-11-18 06:57:35.133237] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:23:42.113 [2024-11-18 06:57:35.133244] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:42.113 [2024-11-18 06:57:35.133292] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:42.113 [2024-11-18 06:57:35.133302] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:23:42.113 [2024-11-18 06:57:35.133311] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:23:42.113 [2024-11-18 06:57:35.133317] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:42.113 [2024-11-18 06:57:35.133336] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:23:42.113 [2024-11-18 06:57:35.133528] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:23:42.113 [2024-11-18 06:57:35.133540] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:42.113 [2024-11-18 06:57:35.133546] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:23:42.113 [2024-11-18 06:57:35.133553] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.209 ms 00:23:42.113 [2024-11-18 06:57:35.133562] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:42.113 [2024-11-18 06:57:35.134866] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:23:42.113 [2024-11-18 06:57:35.137657] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:42.113 [2024-11-18 06:57:35.137687] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:23:42.113 [2024-11-18 06:57:35.137695] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.790 ms 00:23:42.113 [2024-11-18 06:57:35.137701] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:42.113 [2024-11-18 06:57:35.137743] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:42.113 [2024-11-18 06:57:35.137750] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super 
block 00:23:42.113 [2024-11-18 06:57:35.137757] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:23:42.113 [2024-11-18 06:57:35.137765] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:42.113 [2024-11-18 06:57:35.143926] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:42.113 [2024-11-18 06:57:35.143954] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:23:42.113 [2024-11-18 06:57:35.143963] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.117 ms 00:23:42.113 [2024-11-18 06:57:35.143972] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:42.113 [2024-11-18 06:57:35.144051] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:42.113 [2024-11-18 06:57:35.144059] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:23:42.113 [2024-11-18 06:57:35.144065] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms 00:23:42.113 [2024-11-18 06:57:35.144071] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:42.113 [2024-11-18 06:57:35.144101] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:42.113 [2024-11-18 06:57:35.144112] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:23:42.113 [2024-11-18 06:57:35.144118] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:23:42.113 [2024-11-18 06:57:35.144125] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:42.113 [2024-11-18 06:57:35.144158] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:23:42.113 [2024-11-18 06:57:35.145661] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:42.113 [2024-11-18 06:57:35.145683] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:23:42.113 [2024-11-18 06:57:35.145691] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.508 ms 00:23:42.113 [2024-11-18 06:57:35.145701] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:42.113 [2024-11-18 06:57:35.145726] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:42.113 [2024-11-18 06:57:35.145733] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:23:42.113 [2024-11-18 06:57:35.145739] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:23:42.113 [2024-11-18 06:57:35.145745] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:42.113 [2024-11-18 06:57:35.145759] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:23:42.113 [2024-11-18 06:57:35.145778] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:23:42.113 [2024-11-18 06:57:35.145807] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:23:42.113 [2024-11-18 06:57:35.145822] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:23:42.113 [2024-11-18 06:57:35.145904] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:23:42.113 [2024-11-18 06:57:35.145913] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:23:42.113 
[2024-11-18 06:57:35.145921] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:23:42.113 [2024-11-18 06:57:35.145929] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:23:42.113 [2024-11-18 06:57:35.145935] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:23:42.113 [2024-11-18 06:57:35.145942] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:23:42.114 [2024-11-18 06:57:35.145947] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:23:42.114 [2024-11-18 06:57:35.145955] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:23:42.114 [2024-11-18 06:57:35.145962] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:23:42.114 [2024-11-18 06:57:35.145972] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:42.114 [2024-11-18 06:57:35.145988] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:23:42.114 [2024-11-18 06:57:35.145995] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.216 ms 00:23:42.114 [2024-11-18 06:57:35.146003] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:42.114 [2024-11-18 06:57:35.146066] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:42.114 [2024-11-18 06:57:35.146073] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:23:42.114 [2024-11-18 06:57:35.146079] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:23:42.114 [2024-11-18 06:57:35.146085] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:42.114 [2024-11-18 06:57:35.146157] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:23:42.114 [2024-11-18 06:57:35.146172] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:23:42.114 [2024-11-18 06:57:35.146179] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:42.114 [2024-11-18 06:57:35.146185] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:42.114 [2024-11-18 06:57:35.146191] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:23:42.114 [2024-11-18 06:57:35.146200] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:23:42.114 [2024-11-18 06:57:35.146205] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:23:42.114 [2024-11-18 06:57:35.146212] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:23:42.114 [2024-11-18 06:57:35.146217] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:23:42.114 [2024-11-18 06:57:35.146222] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:42.114 [2024-11-18 06:57:35.146227] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:23:42.114 [2024-11-18 06:57:35.146232] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:23:42.114 [2024-11-18 06:57:35.146243] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:42.114 [2024-11-18 06:57:35.146249] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:23:42.114 [2024-11-18 06:57:35.146254] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:23:42.114 [2024-11-18 06:57:35.146259] ftl_layout.c: 133:dump_region: 
*NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:42.114 [2024-11-18 06:57:35.146264] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:23:42.114 [2024-11-18 06:57:35.146269] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:23:42.114 [2024-11-18 06:57:35.146274] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:42.114 [2024-11-18 06:57:35.146279] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:23:42.114 [2024-11-18 06:57:35.146284] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:23:42.114 [2024-11-18 06:57:35.146295] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:42.114 [2024-11-18 06:57:35.146302] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:23:42.114 [2024-11-18 06:57:35.146307] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:23:42.114 [2024-11-18 06:57:35.146313] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:42.114 [2024-11-18 06:57:35.146319] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:23:42.114 [2024-11-18 06:57:35.146325] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:23:42.114 [2024-11-18 06:57:35.146331] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:42.114 [2024-11-18 06:57:35.146337] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:23:42.114 [2024-11-18 06:57:35.146343] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:23:42.114 [2024-11-18 06:57:35.146349] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:42.114 [2024-11-18 06:57:35.146354] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:23:42.114 [2024-11-18 06:57:35.146360] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:23:42.114 [2024-11-18 06:57:35.146366] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:42.114 [2024-11-18 06:57:35.146371] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:23:42.114 [2024-11-18 06:57:35.146377] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:23:42.114 [2024-11-18 06:57:35.146382] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:42.114 [2024-11-18 06:57:35.146390] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:23:42.114 [2024-11-18 06:57:35.146396] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:23:42.114 [2024-11-18 06:57:35.146402] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:42.114 [2024-11-18 06:57:35.146407] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:23:42.114 [2024-11-18 06:57:35.146413] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:23:42.114 [2024-11-18 06:57:35.146418] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:42.114 [2024-11-18 06:57:35.146424] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:23:42.114 [2024-11-18 06:57:35.146433] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:23:42.114 [2024-11-18 06:57:35.146439] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:42.114 [2024-11-18 06:57:35.146446] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:42.114 [2024-11-18 
06:57:35.146455] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:23:42.114 [2024-11-18 06:57:35.146461] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:23:42.114 [2024-11-18 06:57:35.146467] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:23:42.114 [2024-11-18 06:57:35.146473] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:23:42.114 [2024-11-18 06:57:35.146478] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:23:42.114 [2024-11-18 06:57:35.146484] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:23:42.114 [2024-11-18 06:57:35.146492] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:23:42.114 [2024-11-18 06:57:35.146503] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:42.114 [2024-11-18 06:57:35.146512] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:23:42.114 [2024-11-18 06:57:35.146519] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:23:42.114 [2024-11-18 06:57:35.146525] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:23:42.114 [2024-11-18 06:57:35.146532] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:23:42.114 [2024-11-18 06:57:35.146538] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:23:42.114 [2024-11-18 06:57:35.146545] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:23:42.114 [2024-11-18 06:57:35.146551] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:23:42.114 [2024-11-18 06:57:35.146557] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:23:42.114 [2024-11-18 06:57:35.146564] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:23:42.114 [2024-11-18 06:57:35.146570] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:23:42.114 [2024-11-18 06:57:35.146577] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:23:42.114 [2024-11-18 06:57:35.146583] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:23:42.114 [2024-11-18 06:57:35.146590] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:23:42.114 [2024-11-18 06:57:35.146596] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:23:42.114 [2024-11-18 06:57:35.146604] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - 
base dev: 00:23:42.114 [2024-11-18 06:57:35.146611] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:42.114 [2024-11-18 06:57:35.146623] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:23:42.114 [2024-11-18 06:57:35.146630] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:23:42.114 [2024-11-18 06:57:35.146636] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:23:42.115 [2024-11-18 06:57:35.146643] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:23:42.115 [2024-11-18 06:57:35.146649] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:42.115 [2024-11-18 06:57:35.146657] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:23:42.115 [2024-11-18 06:57:35.146664] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.544 ms 00:23:42.115 [2024-11-18 06:57:35.146671] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:42.115 [2024-11-18 06:57:35.157651] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:42.115 [2024-11-18 06:57:35.157675] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:23:42.115 [2024-11-18 06:57:35.157684] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.932 ms 00:23:42.115 [2024-11-18 06:57:35.157691] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:42.115 [2024-11-18 06:57:35.157755] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:42.115 [2024-11-18 06:57:35.157767] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:23:42.115 [2024-11-18 06:57:35.157773] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.044 ms 00:23:42.115 [2024-11-18 06:57:35.157779] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:42.115 [2024-11-18 06:57:35.184606] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:42.115 [2024-11-18 06:57:35.184696] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:23:42.115 [2024-11-18 06:57:35.184751] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.785 ms 00:23:42.115 [2024-11-18 06:57:35.184775] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:42.115 [2024-11-18 06:57:35.184875] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:42.115 [2024-11-18 06:57:35.184905] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:23:42.115 [2024-11-18 06:57:35.184928] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:23:42.115 [2024-11-18 06:57:35.184947] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:42.115 [2024-11-18 06:57:35.185657] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:42.115 [2024-11-18 06:57:35.185733] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:23:42.115 [2024-11-18 06:57:35.185758] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.536 ms 00:23:42.115 [2024-11-18 06:57:35.185779] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:42.115 [2024-11-18 06:57:35.186122] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:42.115 [2024-11-18 06:57:35.186154] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:23:42.115 [2024-11-18 06:57:35.186176] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.292 ms 00:23:42.115 [2024-11-18 06:57:35.186196] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:42.115 [2024-11-18 06:57:35.192914] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:42.115 [2024-11-18 06:57:35.192939] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:23:42.115 [2024-11-18 06:57:35.192947] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.672 ms 00:23:42.115 [2024-11-18 06:57:35.192953] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:42.374 [2024-11-18 06:57:35.195434] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:23:42.374 [2024-11-18 06:57:35.195570] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:23:42.374 [2024-11-18 06:57:35.195583] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:42.374 [2024-11-18 06:57:35.195589] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:23:42.374 [2024-11-18 06:57:35.195596] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.554 ms 00:23:42.374 [2024-11-18 06:57:35.195602] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:42.374 [2024-11-18 06:57:35.207008] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:42.374 [2024-11-18 06:57:35.207035] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:23:42.374 [2024-11-18 06:57:35.207044] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.378 ms 00:23:42.374 [2024-11-18 06:57:35.207050] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:42.374 [2024-11-18 06:57:35.208663] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:42.374 [2024-11-18 06:57:35.208688] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:23:42.374 [2024-11-18 06:57:35.208695] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.580 ms 00:23:42.374 [2024-11-18 06:57:35.208700] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:42.374 [2024-11-18 06:57:35.210019] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:42.374 [2024-11-18 06:57:35.210116] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:23:42.375 [2024-11-18 06:57:35.210127] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.294 ms 00:23:42.375 [2024-11-18 06:57:35.210133] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:42.375 [2024-11-18 06:57:35.210373] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:42.375 [2024-11-18 06:57:35.210391] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:23:42.375 [2024-11-18 06:57:35.210399] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.196 ms 00:23:42.375 [2024-11-18 06:57:35.210405] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:42.375 
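Each FTL management step in these records is traced as an Action / name / duration / status quadruple emitted by trace_step in mngt/ftl_mngt.c. When auditing a run like this one for slow steps, a short one-liner can rank them; a rough sketch, assuming the console output has been saved one record per line to build.log (a placeholder name chosen here for illustration):

    # rank the slowest FTL management steps: remember each "name:" record, pair
    # it with the "duration:" record that follows, sort numerically by ms
    awk '/trace_step/ && /name:/     { sub(/.*name: /, ""); name = $0 }
         /trace_step/ && /duration:/ { sub(/.*duration: /, ""); sub(/ ms.*/, "");
                                       print $0, name }' build.log | sort -rn | head

On this run it would surface steps such as "Initialize NV cache" (26.785 ms) and "Persist L2P" (20.275 ms) at the top.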
[2024-11-18 06:57:35.228292] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:42.375 [2024-11-18 06:57:35.228325] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:23:42.375 [2024-11-18 06:57:35.228334] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.873 ms 00:23:42.375 [2024-11-18 06:57:35.228345] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:42.375 [2024-11-18 06:57:35.234406] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:23:42.375 [2024-11-18 06:57:35.236842] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:42.375 [2024-11-18 06:57:35.236866] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:23:42.375 [2024-11-18 06:57:35.236876] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.417 ms 00:23:42.375 [2024-11-18 06:57:35.236883] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:42.375 [2024-11-18 06:57:35.236951] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:42.375 [2024-11-18 06:57:35.236962] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:23:42.375 [2024-11-18 06:57:35.236971] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:23:42.375 [2024-11-18 06:57:35.236996] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:42.375 [2024-11-18 06:57:35.237056] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:42.375 [2024-11-18 06:57:35.237065] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:23:42.375 [2024-11-18 06:57:35.237073] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:23:42.375 [2024-11-18 06:57:35.237081] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:42.375 [2024-11-18 06:57:35.237098] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:42.375 [2024-11-18 06:57:35.237105] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:23:42.375 [2024-11-18 06:57:35.237113] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:23:42.375 [2024-11-18 06:57:35.237140] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:42.375 [2024-11-18 06:57:35.237172] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:23:42.375 [2024-11-18 06:57:35.237181] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:42.375 [2024-11-18 06:57:35.237187] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:23:42.375 [2024-11-18 06:57:35.237194] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:23:42.375 [2024-11-18 06:57:35.237200] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:42.375 [2024-11-18 06:57:35.241088] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:42.375 [2024-11-18 06:57:35.241114] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:23:42.375 [2024-11-18 06:57:35.241127] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.875 ms 00:23:42.375 [2024-11-18 06:57:35.241134] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:42.375 [2024-11-18 06:57:35.241192] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:42.375 [2024-11-18 06:57:35.241200] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:23:42.375 [2024-11-18 06:57:35.241210] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:23:42.375 [2024-11-18 06:57:35.241217] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:42.375 [2024-11-18 06:57:35.242152] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 108.686 ms, result 0 00:23:43.359  [2024-11-18T06:57:37.416Z] Copying: 25/1024 [MB] (25 MBps) [... intermediate progress updates omitted ...] [2024-11-18T06:58:47.558Z] Copying: 1024/1024 [MB] (average 14 MBps)[2024-11-18 06:58:47.542158] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:54.471 [2024-11-18 06:58:47.542257] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:24:54.471 [2024-11-18 06:58:47.542277] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:24:54.471 [2024-11-18 06:58:47.542290] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:54.471 [2024-11-18 06:58:47.542635] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:24:54.471 [2024-11-18 06:58:47.547225] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:54.471 [2024-11-18 06:58:47.547286] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:24:54.471 [2024-11-18 06:58:47.547301] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.547 ms 00:24:54.471 [2024-11-18 06:58:47.547312] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:54.733 [2024-11-18 06:58:47.557947] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:54.733 [2024-11-18 06:58:47.558244] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:24:54.733 [2024-11-18 06:58:47.558269] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.815 ms 00:24:54.733 [2024-11-18 06:58:47.558281] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:54.733 [2024-11-18 06:58:47.584404] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:54.733 [2024-11-18 06:58:47.584471] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:24:54.733 [2024-11-18 06:58:47.584484] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.097 ms 00:24:54.733 [2024-11-18 06:58:47.584492] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:54.733 [2024-11-18 06:58:47.590738] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:54.733 [2024-11-18 06:58:47.590784] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:24:54.733 [2024-11-18 06:58:47.590797]
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.212 ms 00:24:54.733 [2024-11-18 06:58:47.590807] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:54.733 [2024-11-18 06:58:47.593878] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:54.733 [2024-11-18 06:58:47.593936] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:24:54.733 [2024-11-18 06:58:47.593948] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.999 ms 00:24:54.733 [2024-11-18 06:58:47.593957] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:54.733 [2024-11-18 06:58:47.599613] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:54.733 [2024-11-18 06:58:47.599679] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:24:54.733 [2024-11-18 06:58:47.599691] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.590 ms 00:24:54.733 [2024-11-18 06:58:47.599702] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:54.996 [2024-11-18 06:58:47.878623] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:54.996 [2024-11-18 06:58:47.878682] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:24:54.996 [2024-11-18 06:58:47.878723] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 278.848 ms 00:24:54.996 [2024-11-18 06:58:47.878733] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:54.996 [2024-11-18 06:58:47.882329] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:54.996 [2024-11-18 06:58:47.882554] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:24:54.996 [2024-11-18 06:58:47.882575] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.574 ms 00:24:54.996 [2024-11-18 06:58:47.882583] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:54.996 [2024-11-18 06:58:47.885731] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:54.996 [2024-11-18 06:58:47.885931] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:24:54.996 [2024-11-18 06:58:47.885953] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.057 ms 00:24:54.996 [2024-11-18 06:58:47.885961] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:54.996 [2024-11-18 06:58:47.888418] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:54.996 [2024-11-18 06:58:47.888472] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:24:54.996 [2024-11-18 06:58:47.888484] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.361 ms 00:24:54.996 [2024-11-18 06:58:47.888492] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:54.996 [2024-11-18 06:58:47.891164] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:54.996 [2024-11-18 06:58:47.891219] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:24:54.996 [2024-11-18 06:58:47.891230] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.593 ms 00:24:54.996 [2024-11-18 06:58:47.891240] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:54.996 [2024-11-18 06:58:47.891287] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:24:54.996 [2024-11-18 06:58:47.891305] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 1: 99584 / 261120 wr_cnt: 1 state: open 00:24:54.996 [2024-11-18 06:58:47.891327] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands 2-100: 0 / 261120
wr_cnt: 0 state: free 00:24:54.997 [2024-11-18 06:58:47.892204] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:24:54.997 [2024-11-18 06:58:47.892219] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 77747828-95d0-4e37-a793-0f30a108e4e5 00:24:54.997 [2024-11-18 06:58:47.892229] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 99584 00:24:54.997 [2024-11-18 06:58:47.892238] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 100544 00:24:54.997 [2024-11-18 06:58:47.892246] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 99584 00:24:54.997 [2024-11-18 06:58:47.892254] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0096 00:24:54.997 [2024-11-18 06:58:47.892264] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:24:54.997 [2024-11-18 06:58:47.892281] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:24:54.997 [2024-11-18 06:58:47.892290] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:24:54.997 [2024-11-18 06:58:47.892297] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:24:54.997 [2024-11-18 06:58:47.892304] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:24:54.997 [2024-11-18 06:58:47.892321] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:54.997 [2024-11-18 06:58:47.892330] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:24:54.997 [2024-11-18 06:58:47.892338] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.036 ms 00:24:54.997 [2024-11-18 06:58:47.892356] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:54.997 [2024-11-18 06:58:47.895597] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:54.997 [2024-11-18 06:58:47.895808] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:24:54.997 [2024-11-18 06:58:47.895831] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.206 ms 00:24:54.997 [2024-11-18 06:58:47.895841] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:54.997 [2024-11-18 06:58:47.896031] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:54.997 [2024-11-18 06:58:47.896076] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:24:54.997 [2024-11-18 06:58:47.896086] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.163 ms 00:24:54.997 [2024-11-18 06:58:47.896095] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:54.997 [2024-11-18 06:58:47.906601] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:54.997 [2024-11-18 06:58:47.906816] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:24:54.997 [2024-11-18 06:58:47.906838] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:54.997 [2024-11-18 06:58:47.906857] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:54.997 [2024-11-18 06:58:47.906926] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:54.997 [2024-11-18 06:58:47.906941] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:24:54.997 [2024-11-18 06:58:47.906951] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:54.997 [2024-11-18 06:58:47.906960] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 
status: 0 00:24:54.997 [2024-11-18 06:58:47.907079] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:54.997 [2024-11-18 06:58:47.907092] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:24:54.997 [2024-11-18 06:58:47.907101] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:54.997 [2024-11-18 06:58:47.907110] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:54.998 [2024-11-18 06:58:47.907129] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:54.998 [2024-11-18 06:58:47.907138] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:24:54.998 [2024-11-18 06:58:47.907154] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:54.998 [2024-11-18 06:58:47.907163] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:54.998 [2024-11-18 06:58:47.926213] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:54.998 [2024-11-18 06:58:47.926282] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:24:54.998 [2024-11-18 06:58:47.926299] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:54.998 [2024-11-18 06:58:47.926308] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:54.998 [2024-11-18 06:58:47.941245] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:54.998 [2024-11-18 06:58:47.941495] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:24:54.998 [2024-11-18 06:58:47.941515] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:54.998 [2024-11-18 06:58:47.941532] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:54.998 [2024-11-18 06:58:47.941594] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:54.998 [2024-11-18 06:58:47.941605] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:24:54.998 [2024-11-18 06:58:47.941614] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:54.998 [2024-11-18 06:58:47.941623] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:54.998 [2024-11-18 06:58:47.941669] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:54.998 [2024-11-18 06:58:47.941679] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:24:54.998 [2024-11-18 06:58:47.941688] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:54.998 [2024-11-18 06:58:47.941702] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:54.998 [2024-11-18 06:58:47.941788] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:54.998 [2024-11-18 06:58:47.941800] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:24:54.998 [2024-11-18 06:58:47.941814] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:54.998 [2024-11-18 06:58:47.941823] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:54.998 [2024-11-18 06:58:47.941855] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:54.998 [2024-11-18 06:58:47.941865] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:24:54.998 [2024-11-18 06:58:47.941874] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:54.998 [2024-11-18 
06:58:47.941887] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:54.998 [2024-11-18 06:58:47.941939] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:54.998 [2024-11-18 06:58:47.941949] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:24:54.998 [2024-11-18 06:58:47.941958] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:54.998 [2024-11-18 06:58:47.941967] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:54.998 [2024-11-18 06:58:47.942047] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:54.998 [2024-11-18 06:58:47.942060] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:24:54.998 [2024-11-18 06:58:47.942070] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:54.998 [2024-11-18 06:58:47.942085] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:54.998 [2024-11-18 06:58:47.942252] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 402.766 ms, result 0 00:24:55.570 00:24:55.570 00:24:55.570 06:58:48 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@90 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile2 00:24:58.119 06:58:50 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@93 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --count=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:24:58.119 [2024-11-18 06:58:50.895908] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 00:24:58.119 [2024-11-18 06:58:50.896068] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90466 ] 00:24:58.119 [2024-11-18 06:58:51.056336] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:58.119 [2024-11-18 06:58:51.097413] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:58.382 [2024-11-18 06:58:51.248469] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:24:58.382 [2024-11-18 06:58:51.248564] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:24:58.382 [2024-11-18 06:58:51.413325] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:58.382 [2024-11-18 06:58:51.413390] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:24:58.382 [2024-11-18 06:58:51.413407] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:24:58.382 [2024-11-18 06:58:51.413417] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:58.382 [2024-11-18 06:58:51.413492] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:58.382 [2024-11-18 06:58:51.413509] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:24:58.382 [2024-11-18 06:58:51.413518] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:24:58.382 [2024-11-18 06:58:51.413527] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:58.382 [2024-11-18 06:58:51.413553] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:24:58.382 [2024-11-18 
06:58:51.413841] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:24:58.382 [2024-11-18 06:58:51.413860] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:58.382 [2024-11-18 06:58:51.413876] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:24:58.382 [2024-11-18 06:58:51.413885] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.317 ms 00:24:58.382 [2024-11-18 06:58:51.413903] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:58.382 [2024-11-18 06:58:51.416250] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:24:58.382 [2024-11-18 06:58:51.421102] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:58.382 [2024-11-18 06:58:51.421402] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:24:58.382 [2024-11-18 06:58:51.421424] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.854 ms 00:24:58.382 [2024-11-18 06:58:51.421450] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:58.382 [2024-11-18 06:58:51.421653] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:58.382 [2024-11-18 06:58:51.421691] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:24:58.382 [2024-11-18 06:58:51.421703] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.042 ms 00:24:58.382 [2024-11-18 06:58:51.421716] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:58.382 [2024-11-18 06:58:51.433429] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:58.382 [2024-11-18 06:58:51.433478] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:24:58.382 [2024-11-18 06:58:51.433500] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.659 ms 00:24:58.382 [2024-11-18 06:58:51.433508] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:58.382 [2024-11-18 06:58:51.433617] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:58.382 [2024-11-18 06:58:51.433627] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:24:58.382 [2024-11-18 06:58:51.433640] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.086 ms 00:24:58.382 [2024-11-18 06:58:51.433654] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:58.382 [2024-11-18 06:58:51.433720] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:58.382 [2024-11-18 06:58:51.433739] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:24:58.382 [2024-11-18 06:58:51.433748] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:24:58.382 [2024-11-18 06:58:51.433758] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:58.382 [2024-11-18 06:58:51.433783] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:24:58.382 [2024-11-18 06:58:51.436602] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:58.382 [2024-11-18 06:58:51.436842] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:24:58.383 [2024-11-18 06:58:51.436861] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.825 ms 00:24:58.383 [2024-11-18 06:58:51.436883] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:24:58.383 [2024-11-18 06:58:51.436929] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:58.383 [2024-11-18 06:58:51.436940] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:24:58.383 [2024-11-18 06:58:51.436954] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:24:58.383 [2024-11-18 06:58:51.436964] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:58.383 [2024-11-18 06:58:51.437014] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:24:58.383 [2024-11-18 06:58:51.437044] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:24:58.383 [2024-11-18 06:58:51.437086] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:24:58.383 [2024-11-18 06:58:51.437103] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:24:58.383 [2024-11-18 06:58:51.437215] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:24:58.383 [2024-11-18 06:58:51.437230] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:24:58.383 [2024-11-18 06:58:51.437243] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:24:58.383 [2024-11-18 06:58:51.437260] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:24:58.383 [2024-11-18 06:58:51.437275] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:24:58.383 [2024-11-18 06:58:51.437289] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:24:58.383 [2024-11-18 06:58:51.437297] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:24:58.383 [2024-11-18 06:58:51.437308] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:24:58.383 [2024-11-18 06:58:51.437317] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:24:58.383 [2024-11-18 06:58:51.437327] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:58.383 [2024-11-18 06:58:51.437338] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:24:58.383 [2024-11-18 06:58:51.437348] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.316 ms 00:24:58.383 [2024-11-18 06:58:51.437361] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:58.383 [2024-11-18 06:58:51.437452] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:58.383 [2024-11-18 06:58:51.437466] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:24:58.383 [2024-11-18 06:58:51.437480] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.070 ms 00:24:58.383 [2024-11-18 06:58:51.437489] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:58.383 [2024-11-18 06:58:51.437587] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:24:58.383 [2024-11-18 06:58:51.437600] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:24:58.383 [2024-11-18 06:58:51.437614] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:24:58.383 [2024-11-18 06:58:51.437623] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:58.383 [2024-11-18 06:58:51.437631] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:24:58.383 [2024-11-18 06:58:51.437647] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:24:58.383 [2024-11-18 06:58:51.437657] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:24:58.383 [2024-11-18 06:58:51.437664] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:24:58.383 [2024-11-18 06:58:51.437673] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:24:58.383 [2024-11-18 06:58:51.437689] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:24:58.383 [2024-11-18 06:58:51.437698] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:24:58.383 [2024-11-18 06:58:51.437704] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:24:58.383 [2024-11-18 06:58:51.437711] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:24:58.383 [2024-11-18 06:58:51.437718] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:24:58.383 [2024-11-18 06:58:51.437726] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:24:58.383 [2024-11-18 06:58:51.437734] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:58.383 [2024-11-18 06:58:51.437742] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:24:58.383 [2024-11-18 06:58:51.437751] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:24:58.383 [2024-11-18 06:58:51.437760] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:58.383 [2024-11-18 06:58:51.437767] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:24:58.383 [2024-11-18 06:58:51.437774] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:24:58.383 [2024-11-18 06:58:51.437782] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:58.383 [2024-11-18 06:58:51.437790] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:24:58.383 [2024-11-18 06:58:51.437797] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:24:58.383 [2024-11-18 06:58:51.437806] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:58.383 [2024-11-18 06:58:51.437816] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:24:58.383 [2024-11-18 06:58:51.437823] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:24:58.383 [2024-11-18 06:58:51.437830] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:58.383 [2024-11-18 06:58:51.437837] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:24:58.383 [2024-11-18 06:58:51.437844] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:24:58.383 [2024-11-18 06:58:51.437851] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:58.383 [2024-11-18 06:58:51.437859] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:24:58.383 [2024-11-18 06:58:51.437867] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:24:58.383 [2024-11-18 06:58:51.437873] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:24:58.383 [2024-11-18 06:58:51.437880] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:24:58.383 [2024-11-18 
06:58:51.437887] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:24:58.383 [2024-11-18 06:58:51.437894] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:24:58.383 [2024-11-18 06:58:51.437901] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:24:58.383 [2024-11-18 06:58:51.437908] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:24:58.383 [2024-11-18 06:58:51.437917] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:58.383 [2024-11-18 06:58:51.437923] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:24:58.383 [2024-11-18 06:58:51.437933] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:24:58.383 [2024-11-18 06:58:51.437939] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:58.383 [2024-11-18 06:58:51.437947] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:24:58.383 [2024-11-18 06:58:51.437956] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:24:58.383 [2024-11-18 06:58:51.437970] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:24:58.383 [2024-11-18 06:58:51.437997] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:58.383 [2024-11-18 06:58:51.438008] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:24:58.383 [2024-11-18 06:58:51.438017] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:24:58.383 [2024-11-18 06:58:51.438024] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:24:58.383 [2024-11-18 06:58:51.438031] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:24:58.383 [2024-11-18 06:58:51.438038] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:24:58.384 [2024-11-18 06:58:51.438046] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:24:58.384 [2024-11-18 06:58:51.438057] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:24:58.384 [2024-11-18 06:58:51.438067] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:58.384 [2024-11-18 06:58:51.438076] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:24:58.384 [2024-11-18 06:58:51.438085] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:24:58.384 [2024-11-18 06:58:51.438097] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:24:58.384 [2024-11-18 06:58:51.438107] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:24:58.384 [2024-11-18 06:58:51.438114] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:24:58.384 [2024-11-18 06:58:51.438122] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:24:58.384 [2024-11-18 06:58:51.438141] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 
00:24:58.384 [2024-11-18 06:58:51.438150] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:24:58.384 [2024-11-18 06:58:51.438158] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:24:58.384 [2024-11-18 06:58:51.438166] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:24:58.384 [2024-11-18 06:58:51.438173] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:24:58.384 [2024-11-18 06:58:51.438180] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:24:58.384 [2024-11-18 06:58:51.438186] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:24:58.384 [2024-11-18 06:58:51.438196] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:24:58.384 [2024-11-18 06:58:51.438204] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:24:58.384 [2024-11-18 06:58:51.438215] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:58.384 [2024-11-18 06:58:51.438223] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:24:58.384 [2024-11-18 06:58:51.438230] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:24:58.384 [2024-11-18 06:58:51.438240] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:24:58.384 [2024-11-18 06:58:51.438248] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:24:58.384 [2024-11-18 06:58:51.438255] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:58.384 [2024-11-18 06:58:51.438264] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:24:58.384 [2024-11-18 06:58:51.438273] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.738 ms 00:24:58.384 [2024-11-18 06:58:51.438280] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:58.384 [2024-11-18 06:58:51.458994] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:58.384 [2024-11-18 06:58:51.459049] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:24:58.384 [2024-11-18 06:58:51.459063] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.632 ms 00:24:58.384 [2024-11-18 06:58:51.459072] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:58.384 [2024-11-18 06:58:51.459191] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:58.384 [2024-11-18 06:58:51.459209] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:24:58.384 [2024-11-18 06:58:51.459218] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.071 ms 00:24:58.384 [2024-11-18 06:58:51.459227] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:58.646 [2024-11-18 06:58:51.489929] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:58.646 [2024-11-18 06:58:51.490073] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:24:58.646 [2024-11-18 06:58:51.490110] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.635 ms 00:24:58.646 [2024-11-18 06:58:51.490136] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:58.646 [2024-11-18 06:58:51.490240] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:58.646 [2024-11-18 06:58:51.490268] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:24:58.646 [2024-11-18 06:58:51.490305] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:24:58.646 [2024-11-18 06:58:51.490326] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:58.646 [2024-11-18 06:58:51.491388] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:58.646 [2024-11-18 06:58:51.491632] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:24:58.646 [2024-11-18 06:58:51.491652] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.925 ms 00:24:58.646 [2024-11-18 06:58:51.491679] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:58.646 [2024-11-18 06:58:51.491848] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:58.646 [2024-11-18 06:58:51.491862] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:24:58.646 [2024-11-18 06:58:51.491871] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.133 ms 00:24:58.646 [2024-11-18 06:58:51.491880] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:58.646 [2024-11-18 06:58:51.503180] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:58.646 [2024-11-18 06:58:51.503233] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:24:58.646 [2024-11-18 06:58:51.503254] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.275 ms 00:24:58.646 [2024-11-18 06:58:51.503264] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:58.646 [2024-11-18 06:58:51.508271] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 00:24:58.646 [2024-11-18 06:58:51.508327] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:24:58.646 [2024-11-18 06:58:51.508346] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:58.646 [2024-11-18 06:58:51.508356] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:24:58.646 [2024-11-18 06:58:51.508366] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.952 ms 00:24:58.646 [2024-11-18 06:58:51.508375] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:58.646 [2024-11-18 06:58:51.525036] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:58.646 [2024-11-18 06:58:51.525085] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:24:58.646 [2024-11-18 06:58:51.525099] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.602 ms 00:24:58.646 [2024-11-18 06:58:51.525119] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:58.646 
[2024-11-18 06:58:51.528365] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:58.646 [2024-11-18 06:58:51.528415] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:24:58.646 [2024-11-18 06:58:51.528426] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.189 ms 00:24:58.646 [2024-11-18 06:58:51.528436] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:58.646 [2024-11-18 06:58:51.531231] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:58.646 [2024-11-18 06:58:51.531280] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:24:58.646 [2024-11-18 06:58:51.531300] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.747 ms 00:24:58.646 [2024-11-18 06:58:51.531308] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:58.646 [2024-11-18 06:58:51.531676] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:58.646 [2024-11-18 06:58:51.531691] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:24:58.646 [2024-11-18 06:58:51.531701] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.286 ms 00:24:58.646 [2024-11-18 06:58:51.531714] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:58.646 [2024-11-18 06:58:51.564535] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:58.646 [2024-11-18 06:58:51.564598] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:24:58.646 [2024-11-18 06:58:51.564623] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.793 ms 00:24:58.646 [2024-11-18 06:58:51.564633] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:58.646 [2024-11-18 06:58:51.573479] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:24:58.646 [2024-11-18 06:58:51.577123] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:58.646 [2024-11-18 06:58:51.577168] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:24:58.646 [2024-11-18 06:58:51.577186] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.420 ms 00:24:58.646 [2024-11-18 06:58:51.577196] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:58.646 [2024-11-18 06:58:51.577280] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:58.646 [2024-11-18 06:58:51.577293] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:24:58.646 [2024-11-18 06:58:51.577302] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:24:58.646 [2024-11-18 06:58:51.577311] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:58.646 [2024-11-18 06:58:51.579503] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:58.646 [2024-11-18 06:58:51.579549] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:24:58.646 [2024-11-18 06:58:51.579567] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.153 ms 00:24:58.646 [2024-11-18 06:58:51.579576] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:58.646 [2024-11-18 06:58:51.579609] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:58.646 [2024-11-18 06:58:51.579619] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:24:58.646 [2024-11-18 
06:58:51.579629] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:24:58.646 [2024-11-18 06:58:51.579638] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:58.646 [2024-11-18 06:58:51.579685] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:24:58.646 [2024-11-18 06:58:51.579697] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:58.646 [2024-11-18 06:58:51.579713] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:24:58.646 [2024-11-18 06:58:51.579726] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:24:58.646 [2024-11-18 06:58:51.579738] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:58.646 [2024-11-18 06:58:51.586201] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:58.646 [2024-11-18 06:58:51.586452] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:24:58.646 [2024-11-18 06:58:51.586473] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.444 ms 00:24:58.646 [2024-11-18 06:58:51.586483] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:58.646 [2024-11-18 06:58:51.586572] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:58.646 [2024-11-18 06:58:51.586584] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:24:58.646 [2024-11-18 06:58:51.586600] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.050 ms 00:24:58.646 [2024-11-18 06:58:51.586609] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:58.646 [2024-11-18 06:58:51.588142] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 174.234 ms, result 0 00:25:00.035  [2024-11-18T06:58:54.066Z] Copying: 976/1048576 [kB] (976 kBps) [... intermediate progress updates omitted ...] [2024-11-18T06:59:33.351Z] Copying: 1024/1024 [MB] (average 24 MBps)[2024-11-18 06:59:33.133678] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:40.264 [2024-11-18 06:59:33.133780] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:25:40.264 [2024-11-18 06:59:33.133798] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:25:40.264 [2024-11-18 06:59:33.133808] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:40.264 [2024-11-18 06:59:33.133835] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:25:40.264 [2024-11-18 06:59:33.135131] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:40.264 [2024-11-18 06:59:33.135186] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:25:40.264 [2024-11-18 06:59:33.135198] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.276 ms 00:25:40.264 [2024-11-18 06:59:33.135207] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:40.264 [2024-11-18 06:59:33.135481] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:40.265 [2024-11-18 06:59:33.135493] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:25:40.265 [2024-11-18 06:59:33.135504] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.243 ms 00:25:40.265 [2024-11-18 06:59:33.135515] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:40.265 [2024-11-18 06:59:33.149705] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:40.265 [2024-11-18 06:59:33.149762] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:25:40.265 [2024-11-18 06:59:33.149776] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.169 ms 00:25:40.265 [2024-11-18 06:59:33.149793] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:40.265 [2024-11-18 06:59:33.156059] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:40.265 [2024-11-18 06:59:33.156267] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:25:40.265 [2024-11-18 06:59:33.156289] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.224 ms 00:25:40.265 [2024-11-18 06:59:33.156299] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:40.265 [2024-11-18 06:59:33.159192] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:40.265 [2024-11-18 06:59:33.159244] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:25:40.265 [2024-11-18 06:59:33.159255] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*:
[FTL][ftl0] duration: 2.835 ms
00:25:40.265 [2024-11-18 06:59:33.159263] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:40.265 [2024-11-18 06:59:33.164741] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:25:40.265 [2024-11-18 06:59:33.164797] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata
00:25:40.265 [2024-11-18 06:59:33.164819] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.430 ms
00:25:40.265 [2024-11-18 06:59:33.164828] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:40.265 [2024-11-18 06:59:33.170118] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:25:40.265 [2024-11-18 06:59:33.170289] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata
00:25:40.265 [2024-11-18 06:59:33.170310] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.240 ms
00:25:40.265 [2024-11-18 06:59:33.170320] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:40.265 [2024-11-18 06:59:33.173528] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:25:40.265 [2024-11-18 06:59:33.173712] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata
00:25:40.265 [2024-11-18 06:59:33.173729] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.185 ms
00:25:40.265 [2024-11-18 06:59:33.173736] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:40.265 [2024-11-18 06:59:33.176703] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:25:40.265 [2024-11-18 06:59:33.176761] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata
00:25:40.265 [2024-11-18 06:59:33.176771] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.855 ms
00:25:40.265 [2024-11-18 06:59:33.176778] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:40.265 [2024-11-18 06:59:33.178894] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:25:40.265 [2024-11-18 06:59:33.179079] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock
00:25:40.265 [2024-11-18 06:59:33.179099] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.071 ms
00:25:40.265 [2024-11-18 06:59:33.179107] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:40.265 [2024-11-18 06:59:33.181300] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:25:40.265 [2024-11-18 06:59:33.181440] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state
00:25:40.265 [2024-11-18 06:59:33.181497] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.044 ms
00:25:40.265 [2024-11-18 06:59:33.181521] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:40.265 [2024-11-18 06:59:33.181653] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity:
00:25:40.265 [2024-11-18 06:59:33.181755] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 261120 / 261120 wr_cnt: 1 state: closed
00:25:40.265 [2024-11-18 06:59:33.181847] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 1536 / 261120 wr_cnt: 1 state: open
[Bands 3-100: 0 / 261120 wr_cnt: 0 state: free]
00:25:40.266 [2024-11-18 06:59:33.184405] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]
00:25:40.266 [2024-11-18 06:59:33.184414] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 77747828-95d0-4e37-a793-0f30a108e4e5
00:25:40.266 [2024-11-18 06:59:33.184431] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 262656
00:25:40.266 [2024-11-18 06:59:33.184439] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total
writes: 165056 00:25:40.266 [2024-11-18 06:59:33.184447] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 163072 00:25:40.266 [2024-11-18 06:59:33.184457] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0122 00:25:40.266 [2024-11-18 06:59:33.184465] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:25:40.266 [2024-11-18 06:59:33.184474] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:25:40.266 [2024-11-18 06:59:33.184482] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:25:40.266 [2024-11-18 06:59:33.184489] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:25:40.266 [2024-11-18 06:59:33.184496] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:25:40.266 [2024-11-18 06:59:33.184506] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:40.266 [2024-11-18 06:59:33.184528] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:25:40.266 [2024-11-18 06:59:33.184538] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.857 ms 00:25:40.266 [2024-11-18 06:59:33.184547] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:40.266 [2024-11-18 06:59:33.187129] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:40.266 [2024-11-18 06:59:33.187164] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:25:40.266 [2024-11-18 06:59:33.187175] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.548 ms 00:25:40.266 [2024-11-18 06:59:33.187184] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:40.266 [2024-11-18 06:59:33.187317] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:40.266 [2024-11-18 06:59:33.187326] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:25:40.266 [2024-11-18 06:59:33.187344] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.106 ms 00:25:40.266 [2024-11-18 06:59:33.187358] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:40.266 [2024-11-18 06:59:33.195049] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:40.266 [2024-11-18 06:59:33.195097] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:25:40.266 [2024-11-18 06:59:33.195108] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:40.266 [2024-11-18 06:59:33.195116] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:40.266 [2024-11-18 06:59:33.195174] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:40.266 [2024-11-18 06:59:33.195184] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:25:40.266 [2024-11-18 06:59:33.195205] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:40.266 [2024-11-18 06:59:33.195213] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:40.266 [2024-11-18 06:59:33.195278] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:40.266 [2024-11-18 06:59:33.195290] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:25:40.266 [2024-11-18 06:59:33.195299] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:40.266 [2024-11-18 06:59:33.195307] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:40.266 [2024-11-18 06:59:33.195323] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:40.266 [2024-11-18 06:59:33.195331] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:25:40.266 [2024-11-18 06:59:33.195339] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:40.266 [2024-11-18 06:59:33.195350] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:40.266 [2024-11-18 06:59:33.209180] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:40.266 [2024-11-18 06:59:33.209393] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:25:40.266 [2024-11-18 06:59:33.209411] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:40.266 [2024-11-18 06:59:33.209420] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:40.266 [2024-11-18 06:59:33.219926] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:40.266 [2024-11-18 06:59:33.219997] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:25:40.266 [2024-11-18 06:59:33.220009] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:40.266 [2024-11-18 06:59:33.220026] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:40.266 [2024-11-18 06:59:33.220083] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:40.266 [2024-11-18 06:59:33.220093] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:25:40.266 [2024-11-18 06:59:33.220101] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:40.266 [2024-11-18 06:59:33.220133] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:40.266 [2024-11-18 06:59:33.220168] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:40.266 [2024-11-18 06:59:33.220178] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:25:40.266 [2024-11-18 06:59:33.220187] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:40.267 [2024-11-18 06:59:33.220195] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:40.267 [2024-11-18 06:59:33.220271] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:40.267 [2024-11-18 06:59:33.220281] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:25:40.267 [2024-11-18 06:59:33.220290] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:40.267 [2024-11-18 06:59:33.220299] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:40.267 [2024-11-18 06:59:33.220328] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:40.267 [2024-11-18 06:59:33.220337] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:25:40.267 [2024-11-18 06:59:33.220345] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:40.267 [2024-11-18 06:59:33.220353] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:40.267 [2024-11-18 06:59:33.220399] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:40.267 [2024-11-18 06:59:33.220409] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:25:40.267 [2024-11-18 06:59:33.220417] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:40.267 [2024-11-18 06:59:33.220425] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:25:40.267 [2024-11-18 06:59:33.220471] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:40.267 [2024-11-18 06:59:33.220482] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:25:40.267 [2024-11-18 06:59:33.220490] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:40.267 [2024-11-18 06:59:33.220499] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:40.267 [2024-11-18 06:59:33.220635] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 86.925 ms, result 0 00:25:40.527 00:25:40.527 00:25:40.527 06:59:33 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@94 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:25:43.073 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:25:43.073 06:59:35 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@95 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --count=262144 --skip=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:25:43.073 [2024-11-18 06:59:35.738366] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 00:25:43.073 [2024-11-18 06:59:35.738514] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90920 ] 00:25:43.073 [2024-11-18 06:59:35.900871] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:43.073 [2024-11-18 06:59:35.929908] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:43.073 [2024-11-18 06:59:36.040837] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:25:43.073 [2024-11-18 06:59:36.040918] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:25:43.335 [2024-11-18 06:59:36.204506] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:43.335 [2024-11-18 06:59:36.204575] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:25:43.335 [2024-11-18 06:59:36.204591] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:25:43.335 [2024-11-18 06:59:36.204600] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:43.335 [2024-11-18 06:59:36.204664] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:43.335 [2024-11-18 06:59:36.204675] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:25:43.335 [2024-11-18 06:59:36.204684] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.043 ms 00:25:43.335 [2024-11-18 06:59:36.204696] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:43.335 [2024-11-18 06:59:36.204726] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:25:43.335 [2024-11-18 06:59:36.205035] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:25:43.335 [2024-11-18 06:59:36.205079] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:43.335 [2024-11-18 06:59:36.205088] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:25:43.335 [2024-11-18 06:59:36.205098] mngt/ftl_mngt.c: 430:trace_step: 
*NOTICE*: [FTL][ftl0] duration: 0.365 ms 00:25:43.335 [2024-11-18 06:59:36.205109] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:43.335 [2024-11-18 06:59:36.207013] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:25:43.335 [2024-11-18 06:59:36.211002] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:43.335 [2024-11-18 06:59:36.211063] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:25:43.335 [2024-11-18 06:59:36.211080] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.993 ms 00:25:43.335 [2024-11-18 06:59:36.211101] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:43.335 [2024-11-18 06:59:36.211187] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:43.335 [2024-11-18 06:59:36.211198] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:25:43.335 [2024-11-18 06:59:36.211207] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:25:43.335 [2024-11-18 06:59:36.211215] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:43.335 [2024-11-18 06:59:36.219657] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:43.335 [2024-11-18 06:59:36.219709] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:25:43.335 [2024-11-18 06:59:36.219735] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.398 ms 00:25:43.335 [2024-11-18 06:59:36.219743] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:43.335 [2024-11-18 06:59:36.219855] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:43.336 [2024-11-18 06:59:36.219866] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:25:43.336 [2024-11-18 06:59:36.219878] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.079 ms 00:25:43.336 [2024-11-18 06:59:36.219885] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:43.336 [2024-11-18 06:59:36.219949] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:43.336 [2024-11-18 06:59:36.219959] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:25:43.336 [2024-11-18 06:59:36.219968] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:25:43.336 [2024-11-18 06:59:36.220030] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:43.336 [2024-11-18 06:59:36.220076] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:25:43.336 [2024-11-18 06:59:36.222173] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:43.336 [2024-11-18 06:59:36.222218] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:25:43.336 [2024-11-18 06:59:36.222233] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.111 ms 00:25:43.336 [2024-11-18 06:59:36.222241] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:43.336 [2024-11-18 06:59:36.222286] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:43.336 [2024-11-18 06:59:36.222295] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:25:43.336 [2024-11-18 06:59:36.222307] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:25:43.336 [2024-11-18 06:59:36.222315] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:25:43.336 [2024-11-18 06:59:36.222338] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:25:43.336 [2024-11-18 06:59:36.222359] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:25:43.336 [2024-11-18 06:59:36.222396] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:25:43.336 [2024-11-18 06:59:36.222413] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:25:43.336 [2024-11-18 06:59:36.222520] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:25:43.336 [2024-11-18 06:59:36.222532] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:25:43.336 [2024-11-18 06:59:36.222547] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:25:43.336 [2024-11-18 06:59:36.222561] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:25:43.336 [2024-11-18 06:59:36.222572] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:25:43.336 [2024-11-18 06:59:36.222584] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:25:43.336 [2024-11-18 06:59:36.222592] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:25:43.336 [2024-11-18 06:59:36.222601] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:25:43.336 [2024-11-18 06:59:36.222609] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:25:43.336 [2024-11-18 06:59:36.222618] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:43.336 [2024-11-18 06:59:36.222629] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:25:43.336 [2024-11-18 06:59:36.222638] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.283 ms 00:25:43.336 [2024-11-18 06:59:36.222645] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:43.336 [2024-11-18 06:59:36.222749] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:43.336 [2024-11-18 06:59:36.222763] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:25:43.336 [2024-11-18 06:59:36.222772] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.086 ms 00:25:43.336 [2024-11-18 06:59:36.222779] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:43.336 [2024-11-18 06:59:36.222877] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:25:43.336 [2024-11-18 06:59:36.222889] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:25:43.336 [2024-11-18 06:59:36.222898] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:25:43.336 [2024-11-18 06:59:36.222907] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:43.336 [2024-11-18 06:59:36.222917] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:25:43.336 [2024-11-18 06:59:36.222932] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:25:43.336 [2024-11-18 06:59:36.222941] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:25:43.336 [2024-11-18 
06:59:36.222951] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:25:43.336 [2024-11-18 06:59:36.222960] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:25:43.336 [2024-11-18 06:59:36.222972] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:25:43.336 [2024-11-18 06:59:36.223007] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:25:43.336 [2024-11-18 06:59:36.223015] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:25:43.336 [2024-11-18 06:59:36.223024] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:25:43.336 [2024-11-18 06:59:36.223032] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:25:43.336 [2024-11-18 06:59:36.223041] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:25:43.336 [2024-11-18 06:59:36.223049] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:43.336 [2024-11-18 06:59:36.223057] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:25:43.336 [2024-11-18 06:59:36.223065] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:25:43.336 [2024-11-18 06:59:36.223072] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:43.336 [2024-11-18 06:59:36.223080] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:25:43.336 [2024-11-18 06:59:36.223089] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:25:43.336 [2024-11-18 06:59:36.223097] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:43.336 [2024-11-18 06:59:36.223105] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:25:43.336 [2024-11-18 06:59:36.223113] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:25:43.336 [2024-11-18 06:59:36.223121] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:43.336 [2024-11-18 06:59:36.223135] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:25:43.336 [2024-11-18 06:59:36.223144] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:25:43.336 [2024-11-18 06:59:36.223152] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:43.336 [2024-11-18 06:59:36.223160] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:25:43.336 [2024-11-18 06:59:36.223167] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:25:43.336 [2024-11-18 06:59:36.223175] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:43.336 [2024-11-18 06:59:36.223182] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:25:43.336 [2024-11-18 06:59:36.223190] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:25:43.336 [2024-11-18 06:59:36.223197] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:25:43.336 [2024-11-18 06:59:36.223205] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:25:43.336 [2024-11-18 06:59:36.223213] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:25:43.336 [2024-11-18 06:59:36.223221] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:25:43.336 [2024-11-18 06:59:36.223228] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:25:43.336 [2024-11-18 06:59:36.223236] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 
113.62 MiB 00:25:43.336 [2024-11-18 06:59:36.223243] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:43.336 [2024-11-18 06:59:36.223251] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:25:43.336 [2024-11-18 06:59:36.223262] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:25:43.336 [2024-11-18 06:59:36.223273] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:43.336 [2024-11-18 06:59:36.223282] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:25:43.336 [2024-11-18 06:59:36.223295] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:25:43.336 [2024-11-18 06:59:36.223307] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:25:43.336 [2024-11-18 06:59:36.223315] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:43.336 [2024-11-18 06:59:36.223327] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:25:43.336 [2024-11-18 06:59:36.223334] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:25:43.336 [2024-11-18 06:59:36.223341] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:25:43.336 [2024-11-18 06:59:36.223349] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:25:43.336 [2024-11-18 06:59:36.223356] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:25:43.336 [2024-11-18 06:59:36.223362] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:25:43.336 [2024-11-18 06:59:36.223371] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:25:43.336 [2024-11-18 06:59:36.223385] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:43.336 [2024-11-18 06:59:36.223394] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:25:43.336 [2024-11-18 06:59:36.223402] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:25:43.336 [2024-11-18 06:59:36.223412] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:25:43.336 [2024-11-18 06:59:36.223427] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:25:43.336 [2024-11-18 06:59:36.223434] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:25:43.337 [2024-11-18 06:59:36.223441] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:25:43.337 [2024-11-18 06:59:36.223448] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:25:43.337 [2024-11-18 06:59:36.223455] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:25:43.337 [2024-11-18 06:59:36.223462] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:25:43.337 [2024-11-18 06:59:36.223470] upgrade/ftl_sb_v5.c: 
416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:25:43.337 [2024-11-18 06:59:36.223477] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:25:43.337 [2024-11-18 06:59:36.223484] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:25:43.337 [2024-11-18 06:59:36.223491] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:25:43.337 [2024-11-18 06:59:36.223498] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:25:43.337 [2024-11-18 06:59:36.223504] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:25:43.337 [2024-11-18 06:59:36.223514] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:43.337 [2024-11-18 06:59:36.223526] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:25:43.337 [2024-11-18 06:59:36.223533] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:25:43.337 [2024-11-18 06:59:36.223543] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:25:43.337 [2024-11-18 06:59:36.223552] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:25:43.337 [2024-11-18 06:59:36.223559] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:43.337 [2024-11-18 06:59:36.223567] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:25:43.337 [2024-11-18 06:59:36.223575] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.751 ms 00:25:43.337 [2024-11-18 06:59:36.223583] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:43.337 [2024-11-18 06:59:36.238498] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:43.337 [2024-11-18 06:59:36.238553] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:25:43.337 [2024-11-18 06:59:36.238566] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.866 ms 00:25:43.337 [2024-11-18 06:59:36.238575] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:43.337 [2024-11-18 06:59:36.238662] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:43.337 [2024-11-18 06:59:36.238671] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:25:43.337 [2024-11-18 06:59:36.238680] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.060 ms 00:25:43.337 [2024-11-18 06:59:36.238695] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:43.337 [2024-11-18 06:59:36.262794] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:43.337 [2024-11-18 06:59:36.262862] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:25:43.337 [2024-11-18 06:59:36.262881] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.015 ms 
00:25:43.337 [2024-11-18 06:59:36.262893] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:43.337 [2024-11-18 06:59:36.262954] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:43.337 [2024-11-18 06:59:36.262967] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:25:43.337 [2024-11-18 06:59:36.263036] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:25:43.337 [2024-11-18 06:59:36.263049] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:43.337 [2024-11-18 06:59:36.263680] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:43.337 [2024-11-18 06:59:36.263728] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:25:43.337 [2024-11-18 06:59:36.263744] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.552 ms 00:25:43.337 [2024-11-18 06:59:36.263755] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:43.337 [2024-11-18 06:59:36.263962] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:43.337 [2024-11-18 06:59:36.264004] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:25:43.337 [2024-11-18 06:59:36.264018] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.170 ms 00:25:43.337 [2024-11-18 06:59:36.264028] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:43.337 [2024-11-18 06:59:36.272725] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:43.337 [2024-11-18 06:59:36.272941] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:25:43.337 [2024-11-18 06:59:36.273000] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.669 ms 00:25:43.337 [2024-11-18 06:59:36.273011] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:43.337 [2024-11-18 06:59:36.277024] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:25:43.337 [2024-11-18 06:59:36.277070] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:25:43.337 [2024-11-18 06:59:36.277090] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:43.337 [2024-11-18 06:59:36.277099] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:25:43.337 [2024-11-18 06:59:36.277108] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.956 ms 00:25:43.337 [2024-11-18 06:59:36.277115] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:43.337 [2024-11-18 06:59:36.300574] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:43.337 [2024-11-18 06:59:36.300644] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:25:43.337 [2024-11-18 06:59:36.300659] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.401 ms 00:25:43.337 [2024-11-18 06:59:36.300668] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:43.337 [2024-11-18 06:59:36.303587] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:43.337 [2024-11-18 06:59:36.303777] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:25:43.337 [2024-11-18 06:59:36.303796] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.852 ms 00:25:43.337 [2024-11-18 06:59:36.303804] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:43.337 [2024-11-18 06:59:36.306475] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:43.337 [2024-11-18 06:59:36.306522] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:25:43.337 [2024-11-18 06:59:36.306543] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.630 ms 00:25:43.337 [2024-11-18 06:59:36.306551] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:43.337 [2024-11-18 06:59:36.306932] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:43.337 [2024-11-18 06:59:36.306944] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:25:43.337 [2024-11-18 06:59:36.306954] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.306 ms 00:25:43.337 [2024-11-18 06:59:36.306961] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:43.337 [2024-11-18 06:59:36.330614] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:43.337 [2024-11-18 06:59:36.330828] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:25:43.337 [2024-11-18 06:59:36.330890] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.456 ms 00:25:43.337 [2024-11-18 06:59:36.330914] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:43.337 [2024-11-18 06:59:36.338967] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:25:43.337 [2024-11-18 06:59:36.342000] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:43.337 [2024-11-18 06:59:36.342146] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:25:43.337 [2024-11-18 06:59:36.342163] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.032 ms 00:25:43.337 [2024-11-18 06:59:36.342172] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:43.337 [2024-11-18 06:59:36.342255] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:43.337 [2024-11-18 06:59:36.342269] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:25:43.337 [2024-11-18 06:59:36.342278] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:25:43.337 [2024-11-18 06:59:36.342294] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:43.337 [2024-11-18 06:59:36.343117] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:43.337 [2024-11-18 06:59:36.343160] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:25:43.337 [2024-11-18 06:59:36.343175] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.785 ms 00:25:43.337 [2024-11-18 06:59:36.343183] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:43.337 [2024-11-18 06:59:36.343211] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:43.337 [2024-11-18 06:59:36.343219] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:25:43.337 [2024-11-18 06:59:36.343229] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:25:43.337 [2024-11-18 06:59:36.343237] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:43.337 [2024-11-18 06:59:36.343278] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:25:43.337 [2024-11-18 06:59:36.343290] mngt/ftl_mngt.c: 427:trace_step: 
*NOTICE*: [FTL][ftl0] Action
00:25:43.337 [2024-11-18 06:59:36.343301] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup
00:25:43.337 [2024-11-18 06:59:36.343310] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms
00:25:43.337 [2024-11-18 06:59:36.343320] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:43.337 [2024-11-18 06:59:36.348484] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:25:43.337 [2024-11-18 06:59:36.348530] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state
00:25:43.337 [2024-11-18 06:59:36.348541] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.144 ms
00:25:43.337 [2024-11-18 06:59:36.348550] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:43.337 [2024-11-18 06:59:36.348633] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:25:43.337 [2024-11-18 06:59:36.348643] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization
00:25:43.337 [2024-11-18 06:59:36.348653] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.042 ms
00:25:43.337 [2024-11-18 06:59:36.348660] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:43.338 [2024-11-18 06:59:36.349869] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 144.829 ms, result 0
00:25:44.724 [2024-11-18T07:00:43.668Z] Copying: 1024/1024 [MB] (average 15 MBps)
[2024-11-18 07:00:43.535114] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:26:50.581 [2024-11-18 07:00:43.535201] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel
00:26:50.581 [2024-11-18 07:00:43.535219] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms
00:26:50.581 [2024-11-18 07:00:43.535243] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:26:50.581 [2024-11-18 07:00:43.535269] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
00:26:50.581 [2024-11-18 07:00:43.536079] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:26:50.581 [2024-11-18 07:00:43.536108] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device
00:26:50.581 [2024-11-18 07:00:43.536121] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.793 ms
00:26:50.581 [2024-11-18 07:00:43.536132] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:26:50.581 [2024-11-18 07:00:43.536380] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:26:50.581 [2024-11-18 07:00:43.536393] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller
00:26:50.581 [2024-11-18 07:00:43.536402] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.222 ms
00:26:50.581 [2024-11-18 07:00:43.536411] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:26:50.581 [2024-11-18 07:00:43.542006] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:26:50.581
[2024-11-18 07:00:43.542045] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:26:50.581 [2024-11-18 07:00:43.542061] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.574 ms 00:26:50.581 [2024-11-18 07:00:43.542089] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:50.581 [2024-11-18 07:00:43.551421] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:50.581 [2024-11-18 07:00:43.551462] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:26:50.581 [2024-11-18 07:00:43.551473] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.280 ms 00:26:50.581 [2024-11-18 07:00:43.551482] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:50.581 [2024-11-18 07:00:43.554589] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:50.581 [2024-11-18 07:00:43.554801] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:26:50.581 [2024-11-18 07:00:43.554822] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.033 ms 00:26:50.581 [2024-11-18 07:00:43.554831] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:50.581 [2024-11-18 07:00:43.559332] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:50.581 [2024-11-18 07:00:43.559384] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:26:50.581 [2024-11-18 07:00:43.559395] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.459 ms 00:26:50.581 [2024-11-18 07:00:43.559404] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:50.581 [2024-11-18 07:00:43.563789] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:50.581 [2024-11-18 07:00:43.563835] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:26:50.581 [2024-11-18 07:00:43.563847] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.337 ms 00:26:50.581 [2024-11-18 07:00:43.563865] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:50.581 [2024-11-18 07:00:43.566866] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:50.581 [2024-11-18 07:00:43.566911] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:26:50.581 [2024-11-18 07:00:43.566921] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.974 ms 00:26:50.581 [2024-11-18 07:00:43.566928] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:50.581 [2024-11-18 07:00:43.569792] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:50.581 [2024-11-18 07:00:43.569963] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:26:50.581 [2024-11-18 07:00:43.569996] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.823 ms 00:26:50.581 [2024-11-18 07:00:43.570005] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:50.581 [2024-11-18 07:00:43.572454] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:50.581 [2024-11-18 07:00:43.572519] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:26:50.581 [2024-11-18 07:00:43.572530] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.128 ms 00:26:50.581 [2024-11-18 07:00:43.572539] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:50.581 [2024-11-18 07:00:43.574838] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:50.581 [2024-11-18 07:00:43.574883] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:26:50.581 [2024-11-18 07:00:43.574892] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.222 ms 00:26:50.581 [2024-11-18 07:00:43.574900] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:50.581 [2024-11-18 07:00:43.574940] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:26:50.581 [2024-11-18 07:00:43.574955] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:26:50.581 [2024-11-18 07:00:43.574966] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 1536 / 261120 wr_cnt: 1 state: open
[2024-11-18 07:00:43.574993 – 07:00:43.575751] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands 3–100: 0 / 261120 wr_cnt: 0 state: free
[2024-11-18 07:00:43.575766] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:26:50.583 [2024-11-18 07:00:43.575775] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 77747828-95d0-4e37-a793-0f30a108e4e5 00:26:50.583 [2024-11-18 07:00:43.575784] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 262656 00:26:50.583 [2024-11-18 07:00:43.575793] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:26:50.583 [2024-11-18 07:00:43.575801] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:26:50.583 [2024-11-18 07:00:43.575809] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:26:50.583 [2024-11-18 07:00:43.575817] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:26:50.583 [2024-11-18 07:00:43.575825] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:26:50.583 [2024-11-18 07:00:43.575834] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:26:50.583 [2024-11-18 07:00:43.575841] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:26:50.583 [2024-11-18 07:00:43.575847] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:26:50.583 [2024-11-18 07:00:43.575865] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:50.583 [2024-11-18 07:00:43.575883] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:26:50.583 [2024-11-18 07:00:43.575893] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.927 ms 00:26:50.583 [2024-11-18 07:00:43.575900] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:50.583 [2024-11-18 07:00:43.578117] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:50.583 [2024-11-18 07:00:43.578146] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:26:50.583 [2024-11-18 07:00:43.578157] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.198 ms 00:26:50.583 [2024-11-18 07:00:43.578165] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:50.583 [2024-11-18 07:00:43.578295] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:50.583 [2024-11-18 07:00:43.578305] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:26:50.583 [2024-11-18 07:00:43.578314] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.109 ms 00:26:50.583 [2024-11-18 07:00:43.578322] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:50.583 [2024-11-18 07:00:43.585547] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:50.583 [2024-11-18 07:00:43.585596] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:26:50.583 [2024-11-18 07:00:43.585606] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0]
duration: 0.000 ms 00:26:50.583 [2024-11-18 07:00:43.585619] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:50.583 [2024-11-18 07:00:43.585679] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:50.583 [2024-11-18 07:00:43.585688] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:26:50.583 [2024-11-18 07:00:43.585702] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:50.583 [2024-11-18 07:00:43.585710] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:50.583 [2024-11-18 07:00:43.585771] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:50.583 [2024-11-18 07:00:43.585781] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:26:50.583 [2024-11-18 07:00:43.585789] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:50.583 [2024-11-18 07:00:43.585796] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:50.583 [2024-11-18 07:00:43.585814] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:50.583 [2024-11-18 07:00:43.585823] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:26:50.583 [2024-11-18 07:00:43.585831] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:50.583 [2024-11-18 07:00:43.585840] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:50.583 [2024-11-18 07:00:43.599091] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:50.583 [2024-11-18 07:00:43.599307] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:26:50.583 [2024-11-18 07:00:43.599327] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:50.583 [2024-11-18 07:00:43.599335] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:50.583 [2024-11-18 07:00:43.609288] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:50.583 [2024-11-18 07:00:43.609333] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:26:50.583 [2024-11-18 07:00:43.609345] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:50.583 [2024-11-18 07:00:43.609353] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:50.583 [2024-11-18 07:00:43.609402] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:50.583 [2024-11-18 07:00:43.609413] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:26:50.583 [2024-11-18 07:00:43.609430] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:50.583 [2024-11-18 07:00:43.609438] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:50.583 [2024-11-18 07:00:43.609474] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:50.583 [2024-11-18 07:00:43.609487] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:26:50.583 [2024-11-18 07:00:43.609495] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:50.583 [2024-11-18 07:00:43.609503] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:50.583 [2024-11-18 07:00:43.609573] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:50.583 [2024-11-18 07:00:43.609584] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:26:50.583 
[2024-11-18 07:00:43.609592] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:50.583 [2024-11-18 07:00:43.609600] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:50.583 [2024-11-18 07:00:43.609628] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:50.583 [2024-11-18 07:00:43.609642] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:26:50.583 [2024-11-18 07:00:43.609653] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:50.583 [2024-11-18 07:00:43.609660] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:50.583 [2024-11-18 07:00:43.609700] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:50.583 [2024-11-18 07:00:43.609709] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:26:50.583 [2024-11-18 07:00:43.609717] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:50.583 [2024-11-18 07:00:43.609726] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:50.583 [2024-11-18 07:00:43.609772] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:50.583 [2024-11-18 07:00:43.609786] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:26:50.583 [2024-11-18 07:00:43.609798] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:50.583 [2024-11-18 07:00:43.609807] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:50.583 [2024-11-18 07:00:43.609932] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 74.789 ms, result 0 00:26:50.844 00:26:50.844 00:26:50.844 07:00:43 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@96 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile2.md5 00:26:53.390 /home/vagrant/spdk_repo/spdk/test/ftl/testfile2: OK 00:26:53.390 07:00:46 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@98 -- # trap - SIGINT SIGTERM EXIT 00:26:53.390 07:00:46 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@99 -- # restore_kill 00:26:53.390 07:00:46 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@31 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:26:53.390 07:00:46 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@32 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:26:53.390 07:00:46 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@33 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile2 00:26:53.390 07:00:46 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@34 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:26:53.390 07:00:46 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@35 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile2.md5 00:26:53.390 07:00:46 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@37 -- # killprocess 88879 00:26:53.390 07:00:46 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@954 -- # '[' -z 88879 ']' 00:26:53.390 07:00:46 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@958 -- # kill -0 88879 00:26:53.390 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (88879) - No such process 00:26:53.390 Process with pid 88879 is not found 00:26:53.390 07:00:46 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@981 -- # echo 'Process with pid 88879 is not found' 00:26:53.390 07:00:46 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@38 -- # rmmod nbd 00:26:53.651 Remove shared memory files 
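The md5sum -c line above is the test's core assertion: data written before the unclean stop has to read back intact after FTL recovery, and the 'Set FTL dirty state' / 'Set FTL clean state' steps in the traces above bracket the window in which a kill leaves a dirty superblock. A minimal sketch of that pattern, with $testfile, $ftl_json, and $spdk_tgt_pid as illustrative stand-ins for the harness's own bookkeeping (the real flow lives in test/ftl/dirty_shutdown.sh):

# Record a checksum, stop the target uncleanly, restart, and verify.
md5sum "$testfile" > "$testfile.md5"       # checksum taken before the dirty stop
kill -9 "$spdk_tgt_pid"                    # no clean shutdown: FTL never sets its clean state
build/bin/spdk_tgt --json "$ftl_json" &    # restart with the saved FTL configuration
spdk_tgt_pid=$!
# ...wait for the 'FTL startup' management process to finish recovery...
md5sum -c "$testfile.md5"                  # prints 'OK' only if recovery lost no data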
00:26:53.651 07:00:46 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@39 -- # remove_shm 00:26:53.651 07:00:46 ftl.ftl_dirty_shutdown -- ftl/common.sh@204 -- # echo Remove shared memory files 00:26:53.651 07:00:46 ftl.ftl_dirty_shutdown -- ftl/common.sh@205 -- # rm -f rm -f 00:26:53.651 07:00:46 ftl.ftl_dirty_shutdown -- ftl/common.sh@206 -- # rm -f rm -f 00:26:53.651 07:00:46 ftl.ftl_dirty_shutdown -- ftl/common.sh@207 -- # rm -f rm -f 00:26:53.651 07:00:46 ftl.ftl_dirty_shutdown -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:26:53.651 07:00:46 ftl.ftl_dirty_shutdown -- ftl/common.sh@209 -- # rm -f rm -f 00:26:53.651 ************************************ 00:26:53.651 END TEST ftl_dirty_shutdown 00:26:53.651 ************************************ 00:26:53.651 00:26:53.651 real 4m22.166s 00:26:53.651 user 4m46.737s 00:26:53.651 sys 0m27.478s 00:26:53.651 07:00:46 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:53.651 07:00:46 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@10 -- # set +x 00:26:53.651 07:00:46 ftl -- ftl/ftl.sh@78 -- # run_test ftl_upgrade_shutdown /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 0000:00:11.0 0000:00:10.0 00:26:53.651 07:00:46 ftl -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:26:53.651 07:00:46 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:53.651 07:00:46 ftl -- common/autotest_common.sh@10 -- # set +x 00:26:53.651 ************************************ 00:26:53.651 START TEST ftl_upgrade_shutdown 00:26:53.651 ************************************ 00:26:53.651 07:00:46 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 0000:00:11.0 0000:00:10.0 00:26:53.911 * Looking for test storage... 
00:26:53.911 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:26:53.911 07:00:46 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:26:53.911 07:00:46 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1693 -- # lcov --version 00:26:53.911 07:00:46 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:26:53.911 07:00:46 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:26:53.911 07:00:46 ftl.ftl_upgrade_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:53.911 07:00:46 ftl.ftl_upgrade_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:53.911 07:00:46 ftl.ftl_upgrade_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:53.911 07:00:46 ftl.ftl_upgrade_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:26:53.911 07:00:46 ftl.ftl_upgrade_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:26:53.911 07:00:46 ftl.ftl_upgrade_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:26:53.911 07:00:46 ftl.ftl_upgrade_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:26:53.911 07:00:46 ftl.ftl_upgrade_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:26:53.911 07:00:46 ftl.ftl_upgrade_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:26:53.911 07:00:46 ftl.ftl_upgrade_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:26:53.911 07:00:46 ftl.ftl_upgrade_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:53.912 07:00:46 ftl.ftl_upgrade_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:26:53.912 07:00:46 ftl.ftl_upgrade_shutdown -- scripts/common.sh@345 -- # : 1 00:26:53.912 07:00:46 ftl.ftl_upgrade_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:53.912 07:00:46 ftl.ftl_upgrade_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:26:53.912 07:00:46 ftl.ftl_upgrade_shutdown -- scripts/common.sh@365 -- # decimal 1 00:26:53.912 07:00:46 ftl.ftl_upgrade_shutdown -- scripts/common.sh@353 -- # local d=1 00:26:53.912 07:00:46 ftl.ftl_upgrade_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:53.912 07:00:46 ftl.ftl_upgrade_shutdown -- scripts/common.sh@355 -- # echo 1 00:26:53.912 07:00:46 ftl.ftl_upgrade_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:26:53.912 07:00:46 ftl.ftl_upgrade_shutdown -- scripts/common.sh@366 -- # decimal 2 00:26:53.912 07:00:46 ftl.ftl_upgrade_shutdown -- scripts/common.sh@353 -- # local d=2 00:26:53.912 07:00:46 ftl.ftl_upgrade_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:53.912 07:00:46 ftl.ftl_upgrade_shutdown -- scripts/common.sh@355 -- # echo 2 00:26:53.912 07:00:46 ftl.ftl_upgrade_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:26:53.912 07:00:46 ftl.ftl_upgrade_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:53.912 07:00:46 ftl.ftl_upgrade_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:53.912 07:00:46 ftl.ftl_upgrade_shutdown -- scripts/common.sh@368 -- # return 0 00:26:53.912 07:00:46 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:53.912 07:00:46 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:26:53.912 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:53.912 --rc genhtml_branch_coverage=1 00:26:53.912 --rc genhtml_function_coverage=1 00:26:53.912 --rc genhtml_legend=1 00:26:53.912 --rc geninfo_all_blocks=1 00:26:53.912 --rc geninfo_unexecuted_blocks=1 00:26:53.912 00:26:53.912 ' 00:26:53.912 07:00:46 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:26:53.912 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:53.912 --rc genhtml_branch_coverage=1 00:26:53.912 --rc genhtml_function_coverage=1 00:26:53.912 --rc genhtml_legend=1 00:26:53.912 --rc geninfo_all_blocks=1 00:26:53.912 --rc geninfo_unexecuted_blocks=1 00:26:53.912 00:26:53.912 ' 00:26:53.912 07:00:46 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:26:53.912 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:53.912 --rc genhtml_branch_coverage=1 00:26:53.912 --rc genhtml_function_coverage=1 00:26:53.912 --rc genhtml_legend=1 00:26:53.912 --rc geninfo_all_blocks=1 00:26:53.912 --rc geninfo_unexecuted_blocks=1 00:26:53.912 00:26:53.912 ' 00:26:53.912 07:00:46 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:26:53.912 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:53.912 --rc genhtml_branch_coverage=1 00:26:53.912 --rc genhtml_function_coverage=1 00:26:53.912 --rc genhtml_legend=1 00:26:53.912 --rc geninfo_all_blocks=1 00:26:53.912 --rc geninfo_unexecuted_blocks=1 00:26:53.912 00:26:53.912 ' 00:26:53.912 07:00:46 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:26:53.912 07:00:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 00:26:53.912 07:00:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:26:53.912 07:00:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:26:53.912 07:00:46 ftl.ftl_upgrade_shutdown -- 
ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:26:53.912 07:00:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:26:53.912 07:00:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:26:53.912 07:00:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:26:53.912 07:00:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:26:53.912 07:00:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:26:53.912 07:00:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:26:53.912 07:00:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:26:53.912 07:00:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:26:53.912 07:00:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:26:53.912 07:00:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:26:53.912 07:00:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:26:53.912 07:00:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:26:53.912 07:00:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:26:53.912 07:00:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:26:53.912 07:00:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:26:53.912 07:00:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:26:53.912 07:00:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:26:53.912 07:00:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:26:53.912 07:00:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:26:53.912 07:00:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:26:53.912 07:00:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:26:53.912 07:00:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@23 -- # spdk_ini_pid= 00:26:53.912 07:00:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:26:53.912 07:00:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:26:53.912 07:00:46 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@17 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:26:53.912 07:00:46 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@19 -- # export FTL_BDEV=ftl 00:26:53.912 07:00:46 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@19 -- # FTL_BDEV=ftl 00:26:53.912 07:00:46 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@20 -- # export FTL_BASE=0000:00:11.0 00:26:53.912 07:00:46 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@20 -- # FTL_BASE=0000:00:11.0 00:26:53.912 07:00:46 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@21 -- # export FTL_BASE_SIZE=20480 00:26:53.912 07:00:46 
ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@21 -- # FTL_BASE_SIZE=20480 00:26:53.912 07:00:46 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@22 -- # export FTL_CACHE=0000:00:10.0 00:26:53.912 07:00:46 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@22 -- # FTL_CACHE=0000:00:10.0 00:26:53.912 07:00:46 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@23 -- # export FTL_CACHE_SIZE=5120 00:26:53.912 07:00:46 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@23 -- # FTL_CACHE_SIZE=5120 00:26:53.912 07:00:46 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@24 -- # export FTL_L2P_DRAM_LIMIT=2 00:26:53.912 07:00:46 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@24 -- # FTL_L2P_DRAM_LIMIT=2 00:26:53.912 07:00:46 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@26 -- # tcp_target_setup 00:26:53.912 07:00:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:26:53.912 07:00:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:26:53.912 07:00:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:26:53.912 07:00:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=91716 00:26:53.912 07:00:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' 00:26:53.912 07:00:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:26:53.912 07:00:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 91716 00:26:53.912 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:53.912 07:00:46 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # '[' -z 91716 ']' 00:26:53.912 07:00:46 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:53.912 07:00:46 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:53.912 07:00:46 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:53.912 07:00:46 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:53.912 07:00:46 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:26:53.912 [2024-11-18 07:00:46.949062] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 
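waitforlisten blocks the script until the freshly forked spdk_tgt (pid 91716 here) is actually serving RPCs on /var/tmp/spdk.sock. A rough equivalent of that wait; the polling loop below is an assumption, the real helper lives in test/common/autotest_common.sh:

# Poll the RPC socket until the target answers; abort if the process dies first.
# rpc_get_methods is a cheap built-in RPC that works as a liveness probe.
until scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
    kill -0 "$spdk_tgt_pid" 2>/dev/null || { echo "spdk_tgt exited early" >&2; exit 1; }
    sleep 0.1
done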
00:26:53.912 [2024-11-18 07:00:46.949298] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91716 ] 00:26:54.172 [2024-11-18 07:00:47.108275] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:54.172 [2024-11-18 07:00:47.136664] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:54.743 07:00:47 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:54.743 07:00:47 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@868 -- # return 0 00:26:54.743 07:00:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:26:54.743 07:00:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@99 -- # params=('FTL_BDEV' 'FTL_BASE' 'FTL_BASE_SIZE' 'FTL_CACHE' 'FTL_CACHE_SIZE' 'FTL_L2P_DRAM_LIMIT') 00:26:54.743 07:00:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@99 -- # local params 00:26:54.743 07:00:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:26:54.743 07:00:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z ftl ]] 00:26:54.743 07:00:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:26:54.743 07:00:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 0000:00:11.0 ]] 00:26:54.743 07:00:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:26:54.743 07:00:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 20480 ]] 00:26:54.743 07:00:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:26:54.743 07:00:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 0000:00:10.0 ]] 00:26:54.743 07:00:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:26:54.743 07:00:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 5120 ]] 00:26:54.743 07:00:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:26:54.743 07:00:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 2 ]] 00:26:54.743 07:00:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@107 -- # create_base_bdev base 0000:00:11.0 20480 00:26:54.743 07:00:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@54 -- # local name=base 00:26:54.743 07:00:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:26:54.743 07:00:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@56 -- # local size=20480 00:26:54.743 07:00:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@59 -- # local base_bdev 00:26:54.743 07:00:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b base -t PCIe -a 0000:00:11.0 00:26:55.315 07:00:48 ftl.ftl_upgrade_shutdown -- ftl/common.sh@60 -- # base_bdev=basen1 00:26:55.315 07:00:48 ftl.ftl_upgrade_shutdown -- ftl/common.sh@62 -- # local base_size 00:26:55.315 07:00:48 ftl.ftl_upgrade_shutdown -- ftl/common.sh@63 -- # get_bdev_size basen1 00:26:55.315 07:00:48 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=basen1 00:26:55.315 07:00:48 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:26:55.315 07:00:48 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:26:55.315 07:00:48 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1385 
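The bdev_nvme_attach_controller call just above exposes the emulated NVMe device at 0000:00:11.0 as basen1; the get_bdev_size trace that follows measures it with jq, and the arithmetic is block_size × num_blocks: 4096 B × 1310720 blocks = 5120 MiB. The requested base size of 20480 MiB exceeds that, which is why the basen1p0 lvol created further down is thin-provisioned (-t). The sequence, condensed into plain commands (same RPCs and jq filters the harness itself runs):

# Attach the PCIe controller, then derive the bdev size in MiB from its JSON.
scripts/rpc.py bdev_nvme_attach_controller -b base -t PCIe -a 0000:00:11.0   # exposes basen1
bs=$(scripts/rpc.py bdev_get_bdevs -b basen1 | jq '.[] .block_size')    # 4096
nb=$(scripts/rpc.py bdev_get_bdevs -b basen1 | jq '.[] .num_blocks')    # 1310720
echo $(( bs * nb / 1024 / 1024 ))                                       # 5120 (MiB)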
-- # local nb 00:26:55.315 07:00:48 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b basen1 00:26:55.315 07:00:48 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:26:55.315 { 00:26:55.315 "name": "basen1", 00:26:55.315 "aliases": [ 00:26:55.315 "e0c6804b-666d-4e4c-a0a3-d38ed0c40e0f" 00:26:55.315 ], 00:26:55.315 "product_name": "NVMe disk", 00:26:55.315 "block_size": 4096, 00:26:55.315 "num_blocks": 1310720, 00:26:55.315 "uuid": "e0c6804b-666d-4e4c-a0a3-d38ed0c40e0f", 00:26:55.315 "numa_id": -1, 00:26:55.315 "assigned_rate_limits": { 00:26:55.315 "rw_ios_per_sec": 0, 00:26:55.315 "rw_mbytes_per_sec": 0, 00:26:55.315 "r_mbytes_per_sec": 0, 00:26:55.315 "w_mbytes_per_sec": 0 00:26:55.315 }, 00:26:55.315 "claimed": true, 00:26:55.315 "claim_type": "read_many_write_one", 00:26:55.315 "zoned": false, 00:26:55.315 "supported_io_types": { 00:26:55.315 "read": true, 00:26:55.315 "write": true, 00:26:55.315 "unmap": true, 00:26:55.315 "flush": true, 00:26:55.315 "reset": true, 00:26:55.315 "nvme_admin": true, 00:26:55.315 "nvme_io": true, 00:26:55.315 "nvme_io_md": false, 00:26:55.315 "write_zeroes": true, 00:26:55.315 "zcopy": false, 00:26:55.315 "get_zone_info": false, 00:26:55.315 "zone_management": false, 00:26:55.315 "zone_append": false, 00:26:55.315 "compare": true, 00:26:55.315 "compare_and_write": false, 00:26:55.315 "abort": true, 00:26:55.315 "seek_hole": false, 00:26:55.315 "seek_data": false, 00:26:55.315 "copy": true, 00:26:55.315 "nvme_iov_md": false 00:26:55.315 }, 00:26:55.315 "driver_specific": { 00:26:55.315 "nvme": [ 00:26:55.315 { 00:26:55.315 "pci_address": "0000:00:11.0", 00:26:55.315 "trid": { 00:26:55.315 "trtype": "PCIe", 00:26:55.315 "traddr": "0000:00:11.0" 00:26:55.315 }, 00:26:55.315 "ctrlr_data": { 00:26:55.315 "cntlid": 0, 00:26:55.315 "vendor_id": "0x1b36", 00:26:55.315 "model_number": "QEMU NVMe Ctrl", 00:26:55.315 "serial_number": "12341", 00:26:55.315 "firmware_revision": "8.0.0", 00:26:55.315 "subnqn": "nqn.2019-08.org.qemu:12341", 00:26:55.315 "oacs": { 00:26:55.315 "security": 0, 00:26:55.315 "format": 1, 00:26:55.315 "firmware": 0, 00:26:55.315 "ns_manage": 1 00:26:55.315 }, 00:26:55.315 "multi_ctrlr": false, 00:26:55.315 "ana_reporting": false 00:26:55.315 }, 00:26:55.315 "vs": { 00:26:55.315 "nvme_version": "1.4" 00:26:55.315 }, 00:26:55.315 "ns_data": { 00:26:55.315 "id": 1, 00:26:55.315 "can_share": false 00:26:55.315 } 00:26:55.315 } 00:26:55.315 ], 00:26:55.315 "mp_policy": "active_passive" 00:26:55.315 } 00:26:55.315 } 00:26:55.315 ]' 00:26:55.315 07:00:48 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:26:55.315 07:00:48 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:26:55.315 07:00:48 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:26:55.315 07:00:48 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # nb=1310720 00:26:55.315 07:00:48 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:26:55.315 07:00:48 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1392 -- # echo 5120 00:26:55.315 07:00:48 ftl.ftl_upgrade_shutdown -- ftl/common.sh@63 -- # base_size=5120 00:26:55.315 07:00:48 ftl.ftl_upgrade_shutdown -- ftl/common.sh@64 -- # [[ 20480 -le 5120 ]] 00:26:55.315 07:00:48 ftl.ftl_upgrade_shutdown -- ftl/common.sh@67 -- # clear_lvols 00:26:55.315 07:00:48 ftl.ftl_upgrade_shutdown -- 
ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:26:55.315 07:00:48 ftl.ftl_upgrade_shutdown -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:26:55.575 07:00:48 ftl.ftl_upgrade_shutdown -- ftl/common.sh@28 -- # stores=100ec8c3-0934-4ef7-bc5a-07ddc3d4c1d2 00:26:55.575 07:00:48 ftl.ftl_upgrade_shutdown -- ftl/common.sh@29 -- # for lvs in $stores 00:26:55.575 07:00:48 ftl.ftl_upgrade_shutdown -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 100ec8c3-0934-4ef7-bc5a-07ddc3d4c1d2 00:26:55.835 07:00:48 ftl.ftl_upgrade_shutdown -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore basen1 lvs 00:26:56.096 07:00:49 ftl.ftl_upgrade_shutdown -- ftl/common.sh@68 -- # lvs=a9108748-7591-4fa3-a3c4-16f4d99abbfe 00:26:56.096 07:00:49 ftl.ftl_upgrade_shutdown -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create basen1p0 20480 -t -u a9108748-7591-4fa3-a3c4-16f4d99abbfe 00:26:56.356 07:00:49 ftl.ftl_upgrade_shutdown -- ftl/common.sh@107 -- # base_bdev=cd347926-d1ba-4456-8192-82ac12c3d43e 00:26:56.356 07:00:49 ftl.ftl_upgrade_shutdown -- ftl/common.sh@108 -- # [[ -z cd347926-d1ba-4456-8192-82ac12c3d43e ]] 00:26:56.356 07:00:49 ftl.ftl_upgrade_shutdown -- ftl/common.sh@113 -- # create_nv_cache_bdev cache 0000:00:10.0 cd347926-d1ba-4456-8192-82ac12c3d43e 5120 00:26:56.356 07:00:49 ftl.ftl_upgrade_shutdown -- ftl/common.sh@35 -- # local name=cache 00:26:56.356 07:00:49 ftl.ftl_upgrade_shutdown -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:26:56.356 07:00:49 ftl.ftl_upgrade_shutdown -- ftl/common.sh@37 -- # local base_bdev=cd347926-d1ba-4456-8192-82ac12c3d43e 00:26:56.356 07:00:49 ftl.ftl_upgrade_shutdown -- ftl/common.sh@38 -- # local cache_size=5120 00:26:56.356 07:00:49 ftl.ftl_upgrade_shutdown -- ftl/common.sh@41 -- # get_bdev_size cd347926-d1ba-4456-8192-82ac12c3d43e 00:26:56.356 07:00:49 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=cd347926-d1ba-4456-8192-82ac12c3d43e 00:26:56.356 07:00:49 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:26:56.356 07:00:49 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:26:56.356 07:00:49 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:26:56.356 07:00:49 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b cd347926-d1ba-4456-8192-82ac12c3d43e 00:26:56.615 07:00:49 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:26:56.615 { 00:26:56.615 "name": "cd347926-d1ba-4456-8192-82ac12c3d43e", 00:26:56.615 "aliases": [ 00:26:56.615 "lvs/basen1p0" 00:26:56.615 ], 00:26:56.615 "product_name": "Logical Volume", 00:26:56.615 "block_size": 4096, 00:26:56.615 "num_blocks": 5242880, 00:26:56.615 "uuid": "cd347926-d1ba-4456-8192-82ac12c3d43e", 00:26:56.615 "assigned_rate_limits": { 00:26:56.615 "rw_ios_per_sec": 0, 00:26:56.615 "rw_mbytes_per_sec": 0, 00:26:56.615 "r_mbytes_per_sec": 0, 00:26:56.615 "w_mbytes_per_sec": 0 00:26:56.615 }, 00:26:56.615 "claimed": false, 00:26:56.615 "zoned": false, 00:26:56.615 "supported_io_types": { 00:26:56.615 "read": true, 00:26:56.615 "write": true, 00:26:56.615 "unmap": true, 00:26:56.615 "flush": false, 00:26:56.615 "reset": true, 00:26:56.615 "nvme_admin": false, 00:26:56.615 "nvme_io": false, 00:26:56.615 "nvme_io_md": false, 00:26:56.615 "write_zeroes": 
true, 00:26:56.615 "zcopy": false, 00:26:56.615 "get_zone_info": false, 00:26:56.615 "zone_management": false, 00:26:56.615 "zone_append": false, 00:26:56.615 "compare": false, 00:26:56.615 "compare_and_write": false, 00:26:56.615 "abort": false, 00:26:56.615 "seek_hole": true, 00:26:56.615 "seek_data": true, 00:26:56.615 "copy": false, 00:26:56.615 "nvme_iov_md": false 00:26:56.615 }, 00:26:56.615 "driver_specific": { 00:26:56.615 "lvol": { 00:26:56.615 "lvol_store_uuid": "a9108748-7591-4fa3-a3c4-16f4d99abbfe", 00:26:56.615 "base_bdev": "basen1", 00:26:56.615 "thin_provision": true, 00:26:56.615 "num_allocated_clusters": 0, 00:26:56.615 "snapshot": false, 00:26:56.615 "clone": false, 00:26:56.615 "esnap_clone": false 00:26:56.615 } 00:26:56.615 } 00:26:56.615 } 00:26:56.615 ]' 00:26:56.615 07:00:49 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:26:56.615 07:00:49 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:26:56.615 07:00:49 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:26:56.615 07:00:49 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # nb=5242880 00:26:56.615 07:00:49 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=20480 00:26:56.615 07:00:49 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1392 -- # echo 20480 00:26:56.615 07:00:49 ftl.ftl_upgrade_shutdown -- ftl/common.sh@41 -- # local base_size=1024 00:26:56.615 07:00:49 ftl.ftl_upgrade_shutdown -- ftl/common.sh@44 -- # local nvc_bdev 00:26:56.615 07:00:49 ftl.ftl_upgrade_shutdown -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b cache -t PCIe -a 0000:00:10.0 00:26:56.875 07:00:49 ftl.ftl_upgrade_shutdown -- ftl/common.sh@45 -- # nvc_bdev=cachen1 00:26:56.875 07:00:49 ftl.ftl_upgrade_shutdown -- ftl/common.sh@47 -- # [[ -z 5120 ]] 00:26:56.875 07:00:49 ftl.ftl_upgrade_shutdown -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create cachen1 -s 5120 1 00:26:57.134 07:00:49 ftl.ftl_upgrade_shutdown -- ftl/common.sh@113 -- # cache_bdev=cachen1p0 00:26:57.134 07:00:49 ftl.ftl_upgrade_shutdown -- ftl/common.sh@114 -- # [[ -z cachen1p0 ]] 00:26:57.134 07:00:49 ftl.ftl_upgrade_shutdown -- ftl/common.sh@119 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 60 bdev_ftl_create -b ftl -d cd347926-d1ba-4456-8192-82ac12c3d43e -c cachen1p0 --l2p_dram_limit 2 00:26:57.134 [2024-11-18 07:00:50.176697] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:57.134 [2024-11-18 07:00:50.176743] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:26:57.134 [2024-11-18 07:00:50.176755] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:26:57.134 [2024-11-18 07:00:50.176765] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:57.134 [2024-11-18 07:00:50.176808] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:57.134 [2024-11-18 07:00:50.176817] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:26:57.134 [2024-11-18 07:00:50.176825] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.027 ms 00:26:57.134 [2024-11-18 07:00:50.176834] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:57.134 [2024-11-18 07:00:50.176848] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:26:57.134 [2024-11-18 
07:00:50.177186] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:26:57.134 [2024-11-18 07:00:50.177203] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:57.134 [2024-11-18 07:00:50.177210] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:26:57.134 [2024-11-18 07:00:50.177217] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.358 ms 00:26:57.134 [2024-11-18 07:00:50.177226] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:57.134 [2024-11-18 07:00:50.177250] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl] Create new FTL, UUID f81c9f16-3d43-431e-b00e-49c2eed8fcc9 00:26:57.134 [2024-11-18 07:00:50.178215] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:57.134 [2024-11-18 07:00:50.178237] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Default-initialize superblock 00:26:57.134 [2024-11-18 07:00:50.178249] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.023 ms 00:26:57.134 [2024-11-18 07:00:50.178257] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:57.134 [2024-11-18 07:00:50.182788] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:57.134 [2024-11-18 07:00:50.182813] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:26:57.134 [2024-11-18 07:00:50.182821] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.472 ms 00:26:57.134 [2024-11-18 07:00:50.182830] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:57.134 [2024-11-18 07:00:50.182868] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:57.134 [2024-11-18 07:00:50.182877] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:26:57.134 [2024-11-18 07:00:50.182885] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.018 ms 00:26:57.134 [2024-11-18 07:00:50.182891] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:57.134 [2024-11-18 07:00:50.182934] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:57.134 [2024-11-18 07:00:50.182941] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:26:57.134 [2024-11-18 07:00:50.182951] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.010 ms 00:26:57.134 [2024-11-18 07:00:50.182956] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:57.134 [2024-11-18 07:00:50.182984] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:26:57.134 [2024-11-18 07:00:50.184251] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:57.134 [2024-11-18 07:00:50.184275] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:26:57.134 [2024-11-18 07:00:50.184282] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.283 ms 00:26:57.134 [2024-11-18 07:00:50.184289] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:57.134 [2024-11-18 07:00:50.184308] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:57.134 [2024-11-18 07:00:50.184315] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:26:57.134 [2024-11-18 07:00:50.184324] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:26:57.134 [2024-11-18 07:00:50.184333] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl] status: 0 00:26:57.134 [2024-11-18 07:00:50.184345] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 1 00:26:57.134 [2024-11-18 07:00:50.184451] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:26:57.134 [2024-11-18 07:00:50.184461] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:26:57.134 [2024-11-18 07:00:50.184471] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 00:26:57.134 [2024-11-18 07:00:50.184481] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:26:57.134 [2024-11-18 07:00:50.184492] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB 00:26:57.134 [2024-11-18 07:00:50.184500] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:26:57.134 [2024-11-18 07:00:50.184513] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:26:57.134 [2024-11-18 07:00:50.184518] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:26:57.134 [2024-11-18 07:00:50.184525] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:26:57.134 [2024-11-18 07:00:50.184531] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:57.134 [2024-11-18 07:00:50.184538] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:26:57.134 [2024-11-18 07:00:50.184544] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.187 ms 00:26:57.134 [2024-11-18 07:00:50.184551] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:57.134 [2024-11-18 07:00:50.184613] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:57.134 [2024-11-18 07:00:50.184627] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:26:57.134 [2024-11-18 07:00:50.184633] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.051 ms 00:26:57.134 [2024-11-18 07:00:50.184639] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:57.134 [2024-11-18 07:00:50.184718] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:26:57.134 [2024-11-18 07:00:50.184727] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:26:57.134 [2024-11-18 07:00:50.184734] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:26:57.134 [2024-11-18 07:00:50.184742] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:57.134 [2024-11-18 07:00:50.184751] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:26:57.134 [2024-11-18 07:00:50.184758] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:26:57.134 [2024-11-18 07:00:50.184763] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:26:57.134 [2024-11-18 07:00:50.184770] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:26:57.134 [2024-11-18 07:00:50.184775] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:26:57.134 [2024-11-18 07:00:50.184782] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:57.134 [2024-11-18 07:00:50.184788] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:26:57.134 [2024-11-18 07:00:50.184795] ftl_layout.c: 131:dump_region: *NOTICE*: 
[FTL][ftl] offset: 14.75 MiB 00:26:57.134 [2024-11-18 07:00:50.184801] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:57.134 [2024-11-18 07:00:50.184809] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:26:57.134 [2024-11-18 07:00:50.184814] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 00:26:57.135 [2024-11-18 07:00:50.184820] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:57.135 [2024-11-18 07:00:50.184825] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:26:57.135 [2024-11-18 07:00:50.184832] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:26:57.135 [2024-11-18 07:00:50.184836] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:57.135 [2024-11-18 07:00:50.184843] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:26:57.135 [2024-11-18 07:00:50.184848] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:26:57.135 [2024-11-18 07:00:50.184854] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:26:57.135 [2024-11-18 07:00:50.184860] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:26:57.135 [2024-11-18 07:00:50.184866] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:26:57.135 [2024-11-18 07:00:50.184871] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:26:57.135 [2024-11-18 07:00:50.184877] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:26:57.135 [2024-11-18 07:00:50.184882] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:26:57.135 [2024-11-18 07:00:50.184889] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:26:57.135 [2024-11-18 07:00:50.184893] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:26:57.135 [2024-11-18 07:00:50.184903] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:26:57.135 [2024-11-18 07:00:50.184909] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:26:57.135 [2024-11-18 07:00:50.184916] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:26:57.135 [2024-11-18 07:00:50.184922] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:26:57.135 [2024-11-18 07:00:50.184930] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:57.135 [2024-11-18 07:00:50.184935] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:26:57.135 [2024-11-18 07:00:50.184942] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:26:57.135 [2024-11-18 07:00:50.184948] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:57.135 [2024-11-18 07:00:50.184955] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:26:57.135 [2024-11-18 07:00:50.184961] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:26:57.135 [2024-11-18 07:00:50.184968] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:57.135 [2024-11-18 07:00:50.184984] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:26:57.135 [2024-11-18 07:00:50.184992] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:26:57.135 [2024-11-18 07:00:50.184998] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:57.135 [2024-11-18 07:00:50.185006] ftl_layout.c: 775:ftl_layout_dump: 
*NOTICE*: [FTL][ftl] Base device layout: 00:26:57.135 [2024-11-18 07:00:50.185013] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:26:57.135 [2024-11-18 07:00:50.185022] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:26:57.135 [2024-11-18 07:00:50.185028] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:57.135 [2024-11-18 07:00:50.185038] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:26:57.135 [2024-11-18 07:00:50.185044] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:26:57.135 [2024-11-18 07:00:50.185051] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:26:57.135 [2024-11-18 07:00:50.185057] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:26:57.135 [2024-11-18 07:00:50.185065] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:26:57.135 [2024-11-18 07:00:50.185070] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:26:57.135 [2024-11-18 07:00:50.185081] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:26:57.135 [2024-11-18 07:00:50.185091] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:26:57.135 [2024-11-18 07:00:50.185099] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:26:57.135 [2024-11-18 07:00:50.185106] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:26:57.135 [2024-11-18 07:00:50.185114] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:26:57.135 [2024-11-18 07:00:50.185120] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:26:57.135 [2024-11-18 07:00:50.185128] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:26:57.135 [2024-11-18 07:00:50.185134] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:26:57.135 [2024-11-18 07:00:50.185143] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:26:57.135 [2024-11-18 07:00:50.185149] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:26:57.135 [2024-11-18 07:00:50.185156] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:26:57.135 [2024-11-18 07:00:50.185163] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:26:57.135 [2024-11-18 07:00:50.185171] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:26:57.135 [2024-11-18 07:00:50.185177] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:26:57.135 [2024-11-18 07:00:50.185184] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region 
type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:26:57.135 [2024-11-18 07:00:50.185190] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:26:57.135 [2024-11-18 07:00:50.185198] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:26:57.135 [2024-11-18 07:00:50.185205] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:26:57.135 [2024-11-18 07:00:50.185212] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:26:57.135 [2024-11-18 07:00:50.185218] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:26:57.135 [2024-11-18 07:00:50.185226] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:26:57.135 [2024-11-18 07:00:50.185237] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:26:57.135 [2024-11-18 07:00:50.185245] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:57.135 [2024-11-18 07:00:50.185252] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:26:57.135 [2024-11-18 07:00:50.185265] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.576 ms 00:26:57.135 [2024-11-18 07:00:50.185272] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:57.135 [2024-11-18 07:00:50.185303] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] NV cache data region needs scrubbing, this may take a while. 
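The FTL bring-up traced above boils down to a short RPC sequence. A minimal sketch, using the bdev names, sizes, UUIDs, and PCI address from this run (rpc.py path shortened; all of these values are run-specific):

  # Base device: lvstore on basen1, then a thin-provisioned 20 GiB lvol on top of it
  scripts/rpc.py bdev_lvol_create_lvstore basen1 lvs
  scripts/rpc.py bdev_lvol_create basen1p0 20480 -t -u a9108748-7591-4fa3-a3c4-16f4d99abbfe
  # NV cache: attach the PCIe controller at 0000:00:10.0 and split off a 5 GiB slice
  scripts/rpc.py bdev_nvme_attach_controller -b cache -t PCIe -a 0000:00:10.0
  scripts/rpc.py bdev_split_create cachen1 -s 5120 1
  # FTL bdev over base + cache; a freshly created device then scrubs its NV cache region
  scripts/rpc.py -t 60 bdev_ftl_create -b ftl -d cd347926-d1ba-4456-8192-82ac12c3d43e -c cachen1p0 --l2p_dram_limit 2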
00:26:57.135 [2024-11-18 07:00:50.185311] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] Scrubbing 5 chunks 00:27:01.349 [2024-11-18 07:00:54.044745] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:01.349 [2024-11-18 07:00:54.044835] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Scrub NV cache 00:27:01.349 [2024-11-18 07:00:54.044856] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3859.413 ms 00:27:01.349 [2024-11-18 07:00:54.044868] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:01.349 [2024-11-18 07:00:54.059493] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:01.349 [2024-11-18 07:00:54.059552] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:27:01.349 [2024-11-18 07:00:54.059569] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 14.479 ms 00:27:01.349 [2024-11-18 07:00:54.059579] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:01.349 [2024-11-18 07:00:54.059664] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:01.349 [2024-11-18 07:00:54.059675] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:27:01.349 [2024-11-18 07:00:54.059687] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.017 ms 00:27:01.349 [2024-11-18 07:00:54.059695] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:01.349 [2024-11-18 07:00:54.072427] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:01.349 [2024-11-18 07:00:54.072479] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:27:01.349 [2024-11-18 07:00:54.072494] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 12.663 ms 00:27:01.349 [2024-11-18 07:00:54.072508] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:01.349 [2024-11-18 07:00:54.072547] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:01.349 [2024-11-18 07:00:54.072555] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:27:01.349 [2024-11-18 07:00:54.072566] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:27:01.349 [2024-11-18 07:00:54.072574] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:01.349 [2024-11-18 07:00:54.073167] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:01.349 [2024-11-18 07:00:54.073190] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:27:01.349 [2024-11-18 07:00:54.073205] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.533 ms 00:27:01.349 [2024-11-18 07:00:54.073215] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:01.349 [2024-11-18 07:00:54.073272] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:01.349 [2024-11-18 07:00:54.073287] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:27:01.349 [2024-11-18 07:00:54.073300] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.022 ms 00:27:01.349 [2024-11-18 07:00:54.073310] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:01.349 [2024-11-18 07:00:54.080901] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:01.349 [2024-11-18 07:00:54.080950] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:27:01.349 [2024-11-18 07:00:54.080963] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.564 ms 00:27:01.349 [2024-11-18 07:00:54.080970] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:01.349 [2024-11-18 07:00:54.090537] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:27:01.349 [2024-11-18 07:00:54.091751] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:01.349 [2024-11-18 07:00:54.091799] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:27:01.349 [2024-11-18 07:00:54.091810] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 10.684 ms 00:27:01.349 [2024-11-18 07:00:54.091820] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:01.349 [2024-11-18 07:00:54.116416] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:01.349 [2024-11-18 07:00:54.116490] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear L2P 00:27:01.349 [2024-11-18 07:00:54.116509] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 24.564 ms 00:27:01.349 [2024-11-18 07:00:54.116523] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:01.349 [2024-11-18 07:00:54.116635] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:01.349 [2024-11-18 07:00:54.116649] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:27:01.349 [2024-11-18 07:00:54.116659] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.053 ms 00:27:01.349 [2024-11-18 07:00:54.116669] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:01.349 [2024-11-18 07:00:54.121599] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:01.349 [2024-11-18 07:00:54.121658] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Save initial band info metadata 00:27:01.349 [2024-11-18 07:00:54.121670] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.889 ms 00:27:01.349 [2024-11-18 07:00:54.121684] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:01.349 [2024-11-18 07:00:54.127026] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:01.349 [2024-11-18 07:00:54.127081] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Save initial chunk info metadata 00:27:01.349 [2024-11-18 07:00:54.127092] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 5.294 ms 00:27:01.349 [2024-11-18 07:00:54.127101] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:01.349 [2024-11-18 07:00:54.127429] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:01.349 [2024-11-18 07:00:54.127445] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:27:01.349 [2024-11-18 07:00:54.127454] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.282 ms 00:27:01.349 [2024-11-18 07:00:54.127474] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:01.349 [2024-11-18 07:00:54.174914] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:01.349 [2024-11-18 07:00:54.174989] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Wipe P2L region 00:27:01.349 [2024-11-18 07:00:54.175006] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 47.417 ms 00:27:01.349 [2024-11-18 07:00:54.175021] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:01.349 [2024-11-18 07:00:54.182597] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 
00:27:01.349 [2024-11-18 07:00:54.182662] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear trim map 00:27:01.349 [2024-11-18 07:00:54.182680] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.494 ms 00:27:01.349 [2024-11-18 07:00:54.182691] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:01.349 [2024-11-18 07:00:54.189352] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:01.349 [2024-11-18 07:00:54.189412] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear trim log 00:27:01.349 [2024-11-18 07:00:54.189424] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 6.609 ms 00:27:01.349 [2024-11-18 07:00:54.189434] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:01.349 [2024-11-18 07:00:54.195635] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:01.349 [2024-11-18 07:00:54.195693] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL dirty state 00:27:01.349 [2024-11-18 07:00:54.195704] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 6.148 ms 00:27:01.349 [2024-11-18 07:00:54.195718] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:01.349 [2024-11-18 07:00:54.195771] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:01.349 [2024-11-18 07:00:54.195784] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:27:01.349 [2024-11-18 07:00:54.195794] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:27:01.349 [2024-11-18 07:00:54.195805] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:01.349 [2024-11-18 07:00:54.195907] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:01.349 [2024-11-18 07:00:54.195922] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:27:01.349 [2024-11-18 07:00:54.195931] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.037 ms 00:27:01.349 [2024-11-18 07:00:54.195941] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:01.349 [2024-11-18 07:00:54.197106] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 4019.896 ms, result 0 00:27:01.349 { 00:27:01.349 "name": "ftl", 00:27:01.349 "uuid": "f81c9f16-3d43-431e-b00e-49c2eed8fcc9" 00:27:01.349 } 00:27:01.350 07:00:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport --trtype TCP 00:27:01.350 [2024-11-18 07:00:54.420324] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:01.611 07:00:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@122 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2018-09.io.spdk:cnode0 -a -m 1 00:27:01.611 07:00:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@123 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2018-09.io.spdk:cnode0 ftl 00:27:01.873 [2024-11-18 07:00:54.844685] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:27:01.873 07:00:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@124 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2018-09.io.spdk:cnode0 -t TCP -f ipv4 -s 4420 -a 127.0.0.1 00:27:02.134 [2024-11-18 07:00:55.053002] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:27:02.134 07:00:55 
ftl.ftl_upgrade_shutdown -- ftl/common.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:27:02.445 07:00:55 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@28 -- # size=1073741824 00:27:02.445 07:00:55 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@29 -- # seek=0 00:27:02.445 07:00:55 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@30 -- # skip=0 00:27:02.445 07:00:55 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@31 -- # bs=1048576 00:27:02.445 07:00:55 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@32 -- # count=1024 00:27:02.445 07:00:55 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@33 -- # iterations=2 00:27:02.445 07:00:55 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@34 -- # qd=2 00:27:02.445 07:00:55 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@35 -- # sums=() 00:27:02.445 07:00:55 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i = 0 )) 00:27:02.445 07:00:55 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:27:02.445 Fill FTL, iteration 1 00:27:02.445 07:00:55 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@39 -- # echo 'Fill FTL, iteration 1' 00:27:02.445 07:00:55 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@40 -- # tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0 00:27:02.445 07:00:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:27:02.445 07:00:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:27:02.445 07:00:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:27:02.445 07:00:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@157 -- # [[ -z ftl ]] 00:27:02.445 07:00:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@163 -- # spdk_ini_pid=91839 00:27:02.445 07:00:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@164 -- # export spdk_ini_pid 00:27:02.445 07:00:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@165 -- # waitforlisten 91839 /var/tmp/spdk.tgt.sock 00:27:02.445 07:00:55 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # '[' -z 91839 ']' 00:27:02.445 07:00:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@162 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock 00:27:02.445 07:00:55 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.tgt.sock 00:27:02.445 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.tgt.sock... 00:27:02.445 07:00:55 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:02.445 07:00:55 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.tgt.sock...' 00:27:02.445 07:00:55 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:02.445 07:00:55 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:27:02.445 [2024-11-18 07:00:55.469958] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 
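Before the fill begins, the FTL bdev has already been exported over NVMe/TCP, and the dd geometry is fixed: bs=1048576 with count=1024 gives a 1 GiB window per iteration, written at queue depth qd=2, for iterations=2 in total. The export, as issued by ftl/common.sh above (address, port, and NQN from this run):

  scripts/rpc.py nvmf_create_transport --trtype TCP
  scripts/rpc.py nvmf_create_subsystem nqn.2018-09.io.spdk:cnode0 -a -m 1
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2018-09.io.spdk:cnode0 ftl
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2018-09.io.spdk:cnode0 -t TCP -f ipv4 -s 4420 -a 127.0.0.1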
00:27:02.445 [2024-11-18 07:00:55.470077] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91839 ] 00:27:02.707 [2024-11-18 07:00:55.628770] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:02.707 [2024-11-18 07:00:55.646681] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:03.279 07:00:56 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:03.279 07:00:56 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@868 -- # return 0 00:27:03.279 07:00:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@167 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock bdev_nvme_attach_controller -b ftl -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2018-09.io.spdk:cnode0 00:27:03.540 ftln1 00:27:03.540 07:00:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@171 -- # echo '{"subsystems": [' 00:27:03.540 07:00:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@172 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock save_subsystem_config -n bdev 00:27:03.802 07:00:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@173 -- # echo ']}' 00:27:03.802 07:00:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@176 -- # killprocess 91839 00:27:03.802 07:00:56 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # '[' -z 91839 ']' 00:27:03.802 07:00:56 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@958 -- # kill -0 91839 00:27:03.802 07:00:56 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # uname 00:27:03.802 07:00:56 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:03.802 07:00:56 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 91839 00:27:03.802 killing process with pid 91839 00:27:03.802 07:00:56 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:27:03.802 07:00:56 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:27:03.802 07:00:56 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@972 -- # echo 'killing process with pid 91839' 00:27:03.802 07:00:56 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@973 -- # kill 91839 00:27:03.802 07:00:56 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@978 -- # wait 91839 00:27:04.062 07:00:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@177 -- # unset spdk_ini_pid 00:27:04.062 07:00:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0 00:27:04.324 [2024-11-18 07:00:57.185854] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 
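tcp_dd drives I/O from a second SPDK process (spdk_tgt pinned to core 1, RPC socket /var/tmp/spdk.tgt.sock) that connects back to the target over loopback. A sketch of the initiator setup performed above; the handoff into ini.json is schematic, since the script assembles that file from echo '{"subsystems": [' ... ']}' around the RPC output:

  # Attach to the exported subsystem; the remote namespace shows up as bdev "ftln1"
  scripts/rpc.py -s /var/tmp/spdk.tgt.sock bdev_nvme_attach_controller -b ftl -t tcp \
    -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2018-09.io.spdk:cnode0
  # Capture the bdev subsystem config so spdk_dd can be started with --json pointing at it
  scripts/rpc.py -s /var/tmp/spdk.tgt.sock save_subsystem_config -n bdev   # wrapped into ini.json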
00:27:04.324 [2024-11-18 07:00:57.185961] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91872 ] 00:27:04.324 [2024-11-18 07:00:57.341445] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:04.324 [2024-11-18 07:00:57.364839] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:05.710  [2024-11-18T07:00:59.749Z] Copying: 192/1024 [MB] (192 MBps) [2024-11-18T07:01:00.688Z] Copying: 431/1024 [MB] (239 MBps) [2024-11-18T07:01:01.629Z] Copying: 689/1024 [MB] (258 MBps) [2024-11-18T07:01:01.889Z] Copying: 945/1024 [MB] (256 MBps) [2024-11-18T07:01:02.150Z] Copying: 1024/1024 [MB] (average 237 MBps) 00:27:09.063 00:27:09.063 07:01:02 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@41 -- # seek=1024 00:27:09.063 07:01:02 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@43 -- # echo 'Calculate MD5 checksum, iteration 1' 00:27:09.063 Calculate MD5 checksum, iteration 1 00:27:09.063 07:01:02 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@44 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:27:09.063 07:01:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:27:09.063 07:01:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:27:09.063 07:01:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:27:09.063 07:01:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:27:09.063 07:01:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:27:09.063 [2024-11-18 07:01:02.089433] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 
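Iteration 1 writes the first 1 GiB of random data at an average of 237 MBps, then reads the same window back for hashing. Trimmed to their dd flags (the real invocations also pass --cpumask, --rpc-socket, and --json, as shown in the log; the output path is shortened here), the two transfers are:

  # Write: 1024 x 1 MiB from /dev/urandom into ftln1 at queue depth 2, starting at block 0
  spdk_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0
  # Read the same region back into a scratch file for the checksum
  spdk_dd --ib=ftln1 --of=test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0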
00:27:09.063 [2024-11-18 07:01:02.089566] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91925 ] 00:27:09.324 [2024-11-18 07:01:02.243433] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:09.324 [2024-11-18 07:01:02.262109] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:10.711  [2024-11-18T07:01:04.398Z] Copying: 611/1024 [MB] (611 MBps) [2024-11-18T07:01:04.398Z] Copying: 1024/1024 [MB] (average 623 MBps) 00:27:11.311 00:27:11.311 07:01:04 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@45 -- # skip=1024 00:27:11.311 07:01:04 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@47 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:27:13.865 07:01:06 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # cut -f1 '-d ' 00:27:13.865 07:01:06 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # sums[i]=ff48456bf7e2ade94f791ad39ed2263a 00:27:13.865 07:01:06 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i++ )) 00:27:13.865 07:01:06 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:27:13.865 Fill FTL, iteration 2 00:27:13.866 07:01:06 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@39 -- # echo 'Fill FTL, iteration 2' 00:27:13.866 07:01:06 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@40 -- # tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=1024 00:27:13.866 07:01:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:27:13.866 07:01:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:27:13.866 07:01:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:27:13.866 07:01:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:27:13.866 07:01:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=1024 00:27:13.866 [2024-11-18 07:01:06.419861] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 
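Each pass advances seek and skip by count blocks, so iteration 2 targets the second 1 GiB window (seek=1024). Schematically, the fill/verify loop behaves as below; this is a reconstruction from the traced variables, not the verbatim upgrade_shutdown.sh, and testfile stands in for test/ftl/file:

  for (( i = 0; i < iterations; i++ )); do
      tcp_dd --if=/dev/urandom --ob=ftln1 --bs=$bs --count=$count --qd=$qd --seek=$seek
      seek=$((seek + count))                     # 0 -> 1024 -> 2048 in this run
      tcp_dd --ib=ftln1 --of="$testfile" --bs=$bs --count=$count --qd=$qd --skip=$skip
      skip=$((skip + count))
      sums[i]=$(md5sum "$testfile" | cut -f1 -d ' ')   # iteration 1: ff48456bf7e2ade94f791ad39ed2263a
  done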
00:27:13.866 [2024-11-18 07:01:06.419954] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91975 ] 00:27:13.866 [2024-11-18 07:01:06.563342] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:13.866 [2024-11-18 07:01:06.579749] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:14.804  [2024-11-18T07:01:08.831Z] Copying: 257/1024 [MB] (257 MBps) [2024-11-18T07:01:09.771Z] Copying: 517/1024 [MB] (260 MBps) [2024-11-18T07:01:10.711Z] Copying: 776/1024 [MB] (259 MBps) [2024-11-18T07:01:10.970Z] Copying: 1024/1024 [MB] (average 258 MBps) 00:27:17.883 00:27:17.883 07:01:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@41 -- # seek=2048 00:27:17.883 Calculate MD5 checksum, iteration 2 00:27:17.883 07:01:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@43 -- # echo 'Calculate MD5 checksum, iteration 2' 00:27:17.883 07:01:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@44 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:27:17.883 07:01:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:27:17.883 07:01:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:27:17.883 07:01:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:27:17.883 07:01:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:27:17.883 07:01:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:27:17.883 [2024-11-18 07:01:10.918471] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 
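Once the second window has been read back and hashed (below), the test flips the upgrade property and checks that the cache actually holds data before shutting the target down. The RPCs involved, from this run; the jq filter is the one traced at upgrade_shutdown.sh@63:

  scripts/rpc.py bdev_ftl_set_property -b ftl -p prep_upgrade_on_shutdown -v true
  scripts/rpc.py bdev_ftl_get_properties -b ftl \
    | jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length'
  # => 3 here: two CLOSED chunks at utilization 1.0 plus one partially filled OPEN chunk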
00:27:17.883 [2024-11-18 07:01:10.918730] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92023 ] 00:27:18.144 [2024-11-18 07:01:11.071375] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:18.144 [2024-11-18 07:01:11.092045] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:19.522  [2024-11-18T07:01:13.179Z] Copying: 662/1024 [MB] (662 MBps) [2024-11-18T07:01:13.750Z] Copying: 1024/1024 [MB] (average 646 MBps) 00:27:20.663 00:27:20.663 07:01:13 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@45 -- # skip=2048 00:27:20.663 07:01:13 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@47 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:27:22.564 07:01:15 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # cut -f1 '-d ' 00:27:22.564 07:01:15 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # sums[i]=d9533e6a9a950f4a97695671825535ea 00:27:22.564 07:01:15 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i++ )) 00:27:22.564 07:01:15 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:27:22.564 07:01:15 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:27:22.564 [2024-11-18 07:01:15.626720] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:22.564 [2024-11-18 07:01:15.626870] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:27:22.564 [2024-11-18 07:01:15.626926] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:27:22.564 [2024-11-18 07:01:15.626946] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:22.564 [2024-11-18 07:01:15.626993] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:22.564 [2024-11-18 07:01:15.627073] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:27:22.564 [2024-11-18 07:01:15.627094] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:27:22.564 [2024-11-18 07:01:15.627111] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:22.564 [2024-11-18 07:01:15.627162] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:22.564 [2024-11-18 07:01:15.627183] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:27:22.564 [2024-11-18 07:01:15.627199] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:27:22.564 [2024-11-18 07:01:15.627218] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:22.564 [2024-11-18 07:01:15.627284] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.548 ms, result 0 00:27:22.564 true 00:27:22.564 07:01:15 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:27:22.822 { 00:27:22.822 "name": "ftl", 00:27:22.822 "properties": [ 00:27:22.822 { 00:27:22.822 "name": "superblock_version", 00:27:22.822 "value": 5, 00:27:22.822 "read-only": true 00:27:22.822 }, 00:27:22.822 { 00:27:22.822 "name": "base_device", 00:27:22.822 "bands": [ 00:27:22.822 { 00:27:22.822 "id": 0, 00:27:22.822 "state": "FREE", 00:27:22.822 "validity": 0.0 
00:27:22.822 }, 00:27:22.822 { 00:27:22.822 "id": 1, 00:27:22.822 "state": "FREE", 00:27:22.822 "validity": 0.0 00:27:22.822 }, 00:27:22.822 { 00:27:22.822 "id": 2, 00:27:22.822 "state": "FREE", 00:27:22.822 "validity": 0.0 00:27:22.822 }, 00:27:22.822 { 00:27:22.822 "id": 3, 00:27:22.822 "state": "FREE", 00:27:22.822 "validity": 0.0 00:27:22.822 }, 00:27:22.822 { 00:27:22.822 "id": 4, 00:27:22.822 "state": "FREE", 00:27:22.822 "validity": 0.0 00:27:22.822 }, 00:27:22.822 { 00:27:22.822 "id": 5, 00:27:22.822 "state": "FREE", 00:27:22.822 "validity": 0.0 00:27:22.822 }, 00:27:22.822 { 00:27:22.822 "id": 6, 00:27:22.822 "state": "FREE", 00:27:22.822 "validity": 0.0 00:27:22.822 }, 00:27:22.822 { 00:27:22.822 "id": 7, 00:27:22.822 "state": "FREE", 00:27:22.822 "validity": 0.0 00:27:22.822 }, 00:27:22.822 { 00:27:22.822 "id": 8, 00:27:22.822 "state": "FREE", 00:27:22.823 "validity": 0.0 00:27:22.823 }, 00:27:22.823 { 00:27:22.823 "id": 9, 00:27:22.823 "state": "FREE", 00:27:22.823 "validity": 0.0 00:27:22.823 }, 00:27:22.823 { 00:27:22.823 "id": 10, 00:27:22.823 "state": "FREE", 00:27:22.823 "validity": 0.0 00:27:22.823 }, 00:27:22.823 { 00:27:22.823 "id": 11, 00:27:22.823 "state": "FREE", 00:27:22.823 "validity": 0.0 00:27:22.823 }, 00:27:22.823 { 00:27:22.823 "id": 12, 00:27:22.823 "state": "FREE", 00:27:22.823 "validity": 0.0 00:27:22.823 }, 00:27:22.823 { 00:27:22.823 "id": 13, 00:27:22.823 "state": "FREE", 00:27:22.823 "validity": 0.0 00:27:22.823 }, 00:27:22.823 { 00:27:22.823 "id": 14, 00:27:22.823 "state": "FREE", 00:27:22.823 "validity": 0.0 00:27:22.823 }, 00:27:22.823 { 00:27:22.823 "id": 15, 00:27:22.823 "state": "FREE", 00:27:22.823 "validity": 0.0 00:27:22.823 }, 00:27:22.823 { 00:27:22.823 "id": 16, 00:27:22.823 "state": "FREE", 00:27:22.823 "validity": 0.0 00:27:22.823 }, 00:27:22.823 { 00:27:22.823 "id": 17, 00:27:22.823 "state": "FREE", 00:27:22.823 "validity": 0.0 00:27:22.823 } 00:27:22.823 ], 00:27:22.823 "read-only": true 00:27:22.823 }, 00:27:22.823 { 00:27:22.823 "name": "cache_device", 00:27:22.823 "type": "bdev", 00:27:22.823 "chunks": [ 00:27:22.823 { 00:27:22.823 "id": 0, 00:27:22.823 "state": "INACTIVE", 00:27:22.823 "utilization": 0.0 00:27:22.823 }, 00:27:22.823 { 00:27:22.823 "id": 1, 00:27:22.823 "state": "CLOSED", 00:27:22.823 "utilization": 1.0 00:27:22.823 }, 00:27:22.823 { 00:27:22.823 "id": 2, 00:27:22.823 "state": "CLOSED", 00:27:22.823 "utilization": 1.0 00:27:22.823 }, 00:27:22.823 { 00:27:22.823 "id": 3, 00:27:22.823 "state": "OPEN", 00:27:22.823 "utilization": 0.001953125 00:27:22.823 }, 00:27:22.823 { 00:27:22.823 "id": 4, 00:27:22.823 "state": "OPEN", 00:27:22.823 "utilization": 0.0 00:27:22.823 } 00:27:22.823 ], 00:27:22.823 "read-only": true 00:27:22.823 }, 00:27:22.823 { 00:27:22.823 "name": "verbose_mode", 00:27:22.823 "value": true, 00:27:22.823 "unit": "", 00:27:22.823 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:27:22.823 }, 00:27:22.823 { 00:27:22.823 "name": "prep_upgrade_on_shutdown", 00:27:22.823 "value": false, 00:27:22.823 "unit": "", 00:27:22.823 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:27:22.823 } 00:27:22.823 ] 00:27:22.823 } 00:27:22.823 07:01:15 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p prep_upgrade_on_shutdown -v true 00:27:23.081 [2024-11-18 07:01:16.027056] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 
00:27:23.081 [2024-11-18 07:01:16.027157] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:27:23.081 [2024-11-18 07:01:16.027196] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:27:23.081 [2024-11-18 07:01:16.027213] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:23.081 [2024-11-18 07:01:16.027241] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:23.081 [2024-11-18 07:01:16.027257] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:27:23.081 [2024-11-18 07:01:16.027272] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:27:23.081 [2024-11-18 07:01:16.027280] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:23.081 [2024-11-18 07:01:16.027295] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:23.081 [2024-11-18 07:01:16.027301] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:27:23.081 [2024-11-18 07:01:16.027307] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:27:23.081 [2024-11-18 07:01:16.027312] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:23.081 [2024-11-18 07:01:16.027357] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.287 ms, result 0 00:27:23.081 true 00:27:23.081 07:01:16 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # ftl_get_properties 00:27:23.081 07:01:16 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length' 00:27:23.081 07:01:16 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:27:23.339 07:01:16 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # used=3 00:27:23.339 07:01:16 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@64 -- # [[ 3 -eq 0 ]] 00:27:23.339 07:01:16 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:27:23.598 [2024-11-18 07:01:16.447405] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:23.598 [2024-11-18 07:01:16.447433] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:27:23.598 [2024-11-18 07:01:16.447440] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:27:23.598 [2024-11-18 07:01:16.447446] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:23.598 [2024-11-18 07:01:16.447462] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:23.598 [2024-11-18 07:01:16.447468] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:27:23.598 [2024-11-18 07:01:16.447474] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:27:23.598 [2024-11-18 07:01:16.447480] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:23.598 [2024-11-18 07:01:16.447495] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:23.598 [2024-11-18 07:01:16.447500] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:27:23.598 [2024-11-18 07:01:16.447506] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:27:23.599 [2024-11-18 07:01:16.447511] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl] status: 0 00:27:23.599 [2024-11-18 07:01:16.447551] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.133 ms, result 0 00:27:23.599 true 00:27:23.599 07:01:16 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:27:23.599 { 00:27:23.599 "name": "ftl", 00:27:23.599 "properties": [ 00:27:23.599 { 00:27:23.599 "name": "superblock_version", 00:27:23.599 "value": 5, 00:27:23.599 "read-only": true 00:27:23.599 }, 00:27:23.599 { 00:27:23.599 "name": "base_device", 00:27:23.599 "bands": [ 00:27:23.599 { 00:27:23.599 "id": 0, 00:27:23.599 "state": "FREE", 00:27:23.599 "validity": 0.0 00:27:23.599 }, 00:27:23.599 { 00:27:23.599 "id": 1, 00:27:23.599 "state": "FREE", 00:27:23.599 "validity": 0.0 00:27:23.599 }, 00:27:23.599 { 00:27:23.599 "id": 2, 00:27:23.599 "state": "FREE", 00:27:23.599 "validity": 0.0 00:27:23.599 }, 00:27:23.599 { 00:27:23.599 "id": 3, 00:27:23.599 "state": "FREE", 00:27:23.599 "validity": 0.0 00:27:23.599 }, 00:27:23.599 { 00:27:23.599 "id": 4, 00:27:23.599 "state": "FREE", 00:27:23.599 "validity": 0.0 00:27:23.599 }, 00:27:23.599 { 00:27:23.599 "id": 5, 00:27:23.599 "state": "FREE", 00:27:23.599 "validity": 0.0 00:27:23.599 }, 00:27:23.599 { 00:27:23.599 "id": 6, 00:27:23.599 "state": "FREE", 00:27:23.599 "validity": 0.0 00:27:23.599 }, 00:27:23.599 { 00:27:23.599 "id": 7, 00:27:23.599 "state": "FREE", 00:27:23.599 "validity": 0.0 00:27:23.599 }, 00:27:23.599 { 00:27:23.599 "id": 8, 00:27:23.599 "state": "FREE", 00:27:23.599 "validity": 0.0 00:27:23.599 }, 00:27:23.599 { 00:27:23.599 "id": 9, 00:27:23.599 "state": "FREE", 00:27:23.599 "validity": 0.0 00:27:23.599 }, 00:27:23.599 { 00:27:23.599 "id": 10, 00:27:23.599 "state": "FREE", 00:27:23.599 "validity": 0.0 00:27:23.599 }, 00:27:23.599 { 00:27:23.599 "id": 11, 00:27:23.599 "state": "FREE", 00:27:23.599 "validity": 0.0 00:27:23.599 }, 00:27:23.599 { 00:27:23.599 "id": 12, 00:27:23.599 "state": "FREE", 00:27:23.599 "validity": 0.0 00:27:23.599 }, 00:27:23.599 { 00:27:23.599 "id": 13, 00:27:23.599 "state": "FREE", 00:27:23.599 "validity": 0.0 00:27:23.599 }, 00:27:23.599 { 00:27:23.599 "id": 14, 00:27:23.599 "state": "FREE", 00:27:23.599 "validity": 0.0 00:27:23.599 }, 00:27:23.599 { 00:27:23.599 "id": 15, 00:27:23.599 "state": "FREE", 00:27:23.599 "validity": 0.0 00:27:23.599 }, 00:27:23.599 { 00:27:23.599 "id": 16, 00:27:23.599 "state": "FREE", 00:27:23.599 "validity": 0.0 00:27:23.599 }, 00:27:23.599 { 00:27:23.599 "id": 17, 00:27:23.599 "state": "FREE", 00:27:23.599 "validity": 0.0 00:27:23.599 } 00:27:23.599 ], 00:27:23.599 "read-only": true 00:27:23.599 }, 00:27:23.599 { 00:27:23.599 "name": "cache_device", 00:27:23.599 "type": "bdev", 00:27:23.599 "chunks": [ 00:27:23.599 { 00:27:23.599 "id": 0, 00:27:23.599 "state": "INACTIVE", 00:27:23.599 "utilization": 0.0 00:27:23.599 }, 00:27:23.599 { 00:27:23.599 "id": 1, 00:27:23.599 "state": "CLOSED", 00:27:23.599 "utilization": 1.0 00:27:23.599 }, 00:27:23.599 { 00:27:23.599 "id": 2, 00:27:23.599 "state": "CLOSED", 00:27:23.599 "utilization": 1.0 00:27:23.599 }, 00:27:23.599 { 00:27:23.599 "id": 3, 00:27:23.599 "state": "OPEN", 00:27:23.599 "utilization": 0.001953125 00:27:23.599 }, 00:27:23.599 { 00:27:23.599 "id": 4, 00:27:23.599 "state": "OPEN", 00:27:23.599 "utilization": 0.0 00:27:23.599 } 00:27:23.599 ], 00:27:23.599 "read-only": true 00:27:23.599 }, 00:27:23.599 { 00:27:23.599 "name": "verbose_mode", 
00:27:23.599 "value": true, 00:27:23.599 "unit": "", 00:27:23.599 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:27:23.599 }, 00:27:23.599 { 00:27:23.599 "name": "prep_upgrade_on_shutdown", 00:27:23.599 "value": true, 00:27:23.599 "unit": "", 00:27:23.599 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:27:23.599 } 00:27:23.599 ] 00:27:23.599 } 00:27:23.599 07:01:16 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@74 -- # tcp_target_shutdown 00:27:23.599 07:01:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@130 -- # [[ -n 91716 ]] 00:27:23.599 07:01:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@131 -- # killprocess 91716 00:27:23.599 07:01:16 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # '[' -z 91716 ']' 00:27:23.599 07:01:16 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@958 -- # kill -0 91716 00:27:23.599 07:01:16 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # uname 00:27:23.599 07:01:16 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:23.599 07:01:16 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 91716 00:27:23.858 killing process with pid 91716 00:27:23.858 07:01:16 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:27:23.858 07:01:16 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:27:23.858 07:01:16 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@972 -- # echo 'killing process with pid 91716' 00:27:23.858 07:01:16 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@973 -- # kill 91716 00:27:23.858 07:01:16 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@978 -- # wait 91716 00:27:23.858 [2024-11-18 07:01:16.808495] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on nvmf_tgt_poll_group_000 00:27:23.858 [2024-11-18 07:01:16.812333] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:23.858 [2024-11-18 07:01:16.812365] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinit core IO channel 00:27:23.858 [2024-11-18 07:01:16.812376] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:27:23.858 [2024-11-18 07:01:16.812383] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:23.858 [2024-11-18 07:01:16.812401] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on app_thread 00:27:23.858 [2024-11-18 07:01:16.812912] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:23.858 [2024-11-18 07:01:16.812927] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Unregister IO device 00:27:23.858 [2024-11-18 07:01:16.812938] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.500 ms 00:27:23.858 [2024-11-18 07:01:16.812944] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:31.975 [2024-11-18 07:01:24.235347] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:31.975 [2024-11-18 07:01:24.235412] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Stop core poller 00:27:31.975 [2024-11-18 07:01:24.235429] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7422.346 ms 00:27:31.976 [2024-11-18 07:01:24.235437] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:31.976 [2024-11-18 07:01:24.236792] mngt/ftl_mngt.c: 427:trace_step: 
*NOTICE*: [FTL][ftl] Action 00:27:31.976 [2024-11-18 07:01:24.236808] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist L2P 00:27:31.976 [2024-11-18 07:01:24.236816] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.343 ms 00:27:31.976 [2024-11-18 07:01:24.236823] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:31.976 [2024-11-18 07:01:24.237684] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:31.976 [2024-11-18 07:01:24.237827] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finish L2P trims 00:27:31.976 [2024-11-18 07:01:24.237845] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.841 ms 00:27:31.976 [2024-11-18 07:01:24.237851] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:31.976 [2024-11-18 07:01:24.240127] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:31.976 [2024-11-18 07:01:24.240156] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist NV cache metadata 00:27:31.976 [2024-11-18 07:01:24.240164] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.246 ms 00:27:31.976 [2024-11-18 07:01:24.240170] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:31.976 [2024-11-18 07:01:24.242898] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:31.976 [2024-11-18 07:01:24.242927] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist valid map metadata 00:27:31.976 [2024-11-18 07:01:24.242935] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.702 ms 00:27:31.976 [2024-11-18 07:01:24.242942] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:31.976 [2024-11-18 07:01:24.243010] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:31.976 [2024-11-18 07:01:24.243022] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist P2L metadata 00:27:31.976 [2024-11-18 07:01:24.243030] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.041 ms 00:27:31.976 [2024-11-18 07:01:24.243037] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:31.976 [2024-11-18 07:01:24.244900] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:31.976 [2024-11-18 07:01:24.245042] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist band info metadata 00:27:31.976 [2024-11-18 07:01:24.245057] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.841 ms 00:27:31.976 [2024-11-18 07:01:24.245064] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:31.976 [2024-11-18 07:01:24.246846] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:31.976 [2024-11-18 07:01:24.246873] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist trim metadata 00:27:31.976 [2024-11-18 07:01:24.246880] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.754 ms 00:27:31.976 [2024-11-18 07:01:24.246886] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:31.976 [2024-11-18 07:01:24.248589] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:31.976 [2024-11-18 07:01:24.248690] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist superblock 00:27:31.976 [2024-11-18 07:01:24.248702] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.677 ms 00:27:31.976 [2024-11-18 07:01:24.248709] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:31.976 [2024-11-18 07:01:24.249831] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:31.976 [2024-11-18 07:01:24.249853] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL clean state 00:27:31.976 [2024-11-18 07:01:24.249860] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.075 ms 00:27:31.976 [2024-11-18 07:01:24.249866] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:31.976 [2024-11-18 07:01:24.249889] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Bands validity: 00:27:31.976 [2024-11-18 07:01:24.249900] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:27:31.976 [2024-11-18 07:01:24.249909] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 2: 261120 / 261120 wr_cnt: 1 state: closed 00:27:31.976 [2024-11-18 07:01:24.249915] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 3: 2048 / 261120 wr_cnt: 1 state: closed 00:27:31.976 [2024-11-18 07:01:24.249921] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:27:31.976 [2024-11-18 07:01:24.249928] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:27:31.976 [2024-11-18 07:01:24.249934] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:27:31.976 [2024-11-18 07:01:24.249939] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:27:31.976 [2024-11-18 07:01:24.249945] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:27:31.976 [2024-11-18 07:01:24.249951] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:27:31.976 [2024-11-18 07:01:24.249957] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:27:31.976 [2024-11-18 07:01:24.249962] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:27:31.976 [2024-11-18 07:01:24.249968] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:27:31.976 [2024-11-18 07:01:24.249988] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:27:31.976 [2024-11-18 07:01:24.249994] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:27:31.976 [2024-11-18 07:01:24.250000] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:27:31.976 [2024-11-18 07:01:24.250008] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:27:31.976 [2024-11-18 07:01:24.250014] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:27:31.976 [2024-11-18 07:01:24.250020] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:27:31.976 [2024-11-18 07:01:24.250029] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] 00:27:31.976 [2024-11-18 07:01:24.250036] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] device UUID: f81c9f16-3d43-431e-b00e-49c2eed8fcc9 00:27:31.976 [2024-11-18 07:01:24.250043] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total valid LBAs: 524288 00:27:31.976 [2024-11-18 07:01:24.250049] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: 
[FTL][ftl] total writes: 786752 00:27:31.976 [2024-11-18 07:01:24.250055] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] user writes: 524288 00:27:31.976 [2024-11-18 07:01:24.250061] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] WAF: 1.5006 00:27:31.976 [2024-11-18 07:01:24.250072] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] limits: 00:27:31.976 [2024-11-18 07:01:24.250080] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] crit: 0 00:27:31.976 [2024-11-18 07:01:24.250088] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] high: 0 00:27:31.976 [2024-11-18 07:01:24.250094] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] low: 0 00:27:31.976 [2024-11-18 07:01:24.250100] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] start: 0 00:27:31.976 [2024-11-18 07:01:24.250107] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:31.976 [2024-11-18 07:01:24.250114] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Dump statistics 00:27:31.976 [2024-11-18 07:01:24.250127] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.219 ms 00:27:31.976 [2024-11-18 07:01:24.250133] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:31.976 [2024-11-18 07:01:24.251876] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:31.976 [2024-11-18 07:01:24.251993] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize L2P 00:27:31.976 [2024-11-18 07:01:24.252009] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.723 ms 00:27:31.976 [2024-11-18 07:01:24.252016] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:31.976 [2024-11-18 07:01:24.252101] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:31.976 [2024-11-18 07:01:24.252108] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize P2L checkpointing 00:27:31.976 [2024-11-18 07:01:24.252115] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.069 ms 00:27:31.976 [2024-11-18 07:01:24.252121] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:31.976 [2024-11-18 07:01:24.258209] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:31.976 [2024-11-18 07:01:24.258244] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:27:31.976 [2024-11-18 07:01:24.258253] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:31.976 [2024-11-18 07:01:24.258260] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:31.976 [2024-11-18 07:01:24.258290] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:31.976 [2024-11-18 07:01:24.258298] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:27:31.976 [2024-11-18 07:01:24.258304] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:31.976 [2024-11-18 07:01:24.258311] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:31.976 [2024-11-18 07:01:24.258351] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:31.976 [2024-11-18 07:01:24.258359] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:27:31.976 [2024-11-18 07:01:24.258368] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:31.976 [2024-11-18 07:01:24.258374] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:31.976 [2024-11-18 07:01:24.258387] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:31.976 [2024-11-18 07:01:24.258394] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:27:31.976 [2024-11-18 07:01:24.258401] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:31.976 [2024-11-18 07:01:24.258410] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:31.976 [2024-11-18 07:01:24.269657] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:31.976 [2024-11-18 07:01:24.269699] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:27:31.976 [2024-11-18 07:01:24.269708] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:31.976 [2024-11-18 07:01:24.269714] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:31.976 [2024-11-18 07:01:24.278316] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:31.976 [2024-11-18 07:01:24.278349] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:27:31.976 [2024-11-18 07:01:24.278358] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:31.976 [2024-11-18 07:01:24.278364] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:31.976 [2024-11-18 07:01:24.278444] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:31.976 [2024-11-18 07:01:24.278453] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:27:31.977 [2024-11-18 07:01:24.278460] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:31.977 [2024-11-18 07:01:24.278471] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:31.977 [2024-11-18 07:01:24.278496] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:31.977 [2024-11-18 07:01:24.278504] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:27:31.977 [2024-11-18 07:01:24.278510] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:31.977 [2024-11-18 07:01:24.278517] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:31.977 [2024-11-18 07:01:24.278579] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:31.977 [2024-11-18 07:01:24.278586] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:27:31.977 [2024-11-18 07:01:24.278593] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:31.977 [2024-11-18 07:01:24.278599] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:31.977 [2024-11-18 07:01:24.278625] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:31.977 [2024-11-18 07:01:24.278633] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize superblock 00:27:31.977 [2024-11-18 07:01:24.278639] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:31.977 [2024-11-18 07:01:24.278645] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:31.977 [2024-11-18 07:01:24.278681] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:31.977 [2024-11-18 07:01:24.278689] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:27:31.977 [2024-11-18 07:01:24.278696] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:31.977 [2024-11-18 07:01:24.278702] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:31.977 
[2024-11-18 07:01:24.278750] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:31.977 [2024-11-18 07:01:24.278759] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:27:31.977 [2024-11-18 07:01:24.278765] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:31.977 [2024-11-18 07:01:24.278771] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:31.977 [2024-11-18 07:01:24.278896] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL shutdown', duration = 7466.499 ms, result 0 00:27:35.269 07:01:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@132 -- # unset spdk_tgt_pid 00:27:35.269 07:01:28 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@75 -- # tcp_target_setup 00:27:35.269 07:01:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:27:35.269 07:01:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:27:35.269 07:01:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:27:35.269 07:01:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=92189 00:27:35.269 07:01:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:27:35.269 07:01:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 92189 00:27:35.269 07:01:28 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # '[' -z 92189 ']' 00:27:35.269 07:01:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:27:35.269 07:01:28 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:35.269 07:01:28 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:35.269 07:01:28 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:35.269 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:35.269 07:01:28 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:35.269 07:01:28 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:27:35.528 [2024-11-18 07:01:28.390223] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 
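The 'FTL shutdown' management process that just finished (duration 7466.499 ms) is the upgrade-ready path: with prep_upgrade_on_shutdown reported as true in the property dump earlier, a plain SIGTERM teardown persists the L2P, NV cache, band, trim and superblock metadata before the process exits, which is why the sequence takes about 7.5 seconds. The WAF in the statistics dump is simply total writes over user writes: 786752 / 524288 ≈ 1.5006. A minimal sketch of the sequence the test drives here, reassembled from the xtrace above (the backgrounding of spdk_tgt and the $! assignment are inferred, since the xtrace only shows the expanded pid):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  # confirm the upgrade flag before shutting down (dumped above)
  "$rpc" bdev_ftl_get_properties -b ftl
  # tcp_target_shutdown: default SIGTERM, then wait for the pid to exit;
  # FTL runs the 'FTL shutdown' management process during teardown
  kill "$spdk_tgt_pid"
  wait "$spdk_tgt_pid"
  # tcp_target_setup: relaunch from the saved target config (pid 92189 here)
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' \
      --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json &
  spdk_tgt_pid=$!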
00:27:35.528 [2024-11-18 07:01:28.390496] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92189 ] 00:27:35.528 [2024-11-18 07:01:28.541739] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:35.528 [2024-11-18 07:01:28.565931] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:35.787 [2024-11-18 07:01:28.864646] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:27:35.787 [2024-11-18 07:01:28.864872] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:27:36.049 [2024-11-18 07:01:29.010743] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:36.049 [2024-11-18 07:01:29.010780] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:27:36.049 [2024-11-18 07:01:29.010795] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:27:36.049 [2024-11-18 07:01:29.010801] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:36.049 [2024-11-18 07:01:29.010852] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:36.049 [2024-11-18 07:01:29.010860] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:27:36.049 [2024-11-18 07:01:29.010871] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.029 ms 00:27:36.049 [2024-11-18 07:01:29.010879] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:36.049 [2024-11-18 07:01:29.010897] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:27:36.049 [2024-11-18 07:01:29.011112] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:27:36.049 [2024-11-18 07:01:29.011130] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:36.049 [2024-11-18 07:01:29.011136] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:27:36.049 [2024-11-18 07:01:29.011146] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.240 ms 00:27:36.049 [2024-11-18 07:01:29.011152] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:36.049 [2024-11-18 07:01:29.012387] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl] SHM: clean 0, shm_clean 0 00:27:36.049 [2024-11-18 07:01:29.015025] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:36.049 [2024-11-18 07:01:29.015056] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Load super block 00:27:36.049 [2024-11-18 07:01:29.015064] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.640 ms 00:27:36.049 [2024-11-18 07:01:29.015070] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:36.049 [2024-11-18 07:01:29.015116] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:36.049 [2024-11-18 07:01:29.015124] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Validate super block 00:27:36.049 [2024-11-18 07:01:29.015131] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.018 ms 00:27:36.049 [2024-11-18 07:01:29.015137] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:36.049 [2024-11-18 07:01:29.021310] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:36.049 [2024-11-18 
07:01:29.021335] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:27:36.049 [2024-11-18 07:01:29.021343] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 6.123 ms 00:27:36.049 [2024-11-18 07:01:29.021349] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:36.049 [2024-11-18 07:01:29.021381] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:36.049 [2024-11-18 07:01:29.021388] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:27:36.049 [2024-11-18 07:01:29.021394] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.018 ms 00:27:36.049 [2024-11-18 07:01:29.021400] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:36.049 [2024-11-18 07:01:29.021435] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:36.049 [2024-11-18 07:01:29.021446] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:27:36.050 [2024-11-18 07:01:29.021453] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:27:36.050 [2024-11-18 07:01:29.021459] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:36.050 [2024-11-18 07:01:29.021476] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:27:36.050 [2024-11-18 07:01:29.023045] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:36.050 [2024-11-18 07:01:29.023066] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:27:36.050 [2024-11-18 07:01:29.023073] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.574 ms 00:27:36.050 [2024-11-18 07:01:29.023079] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:36.050 [2024-11-18 07:01:29.023103] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:36.050 [2024-11-18 07:01:29.023112] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:27:36.050 [2024-11-18 07:01:29.023118] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:27:36.050 [2024-11-18 07:01:29.023125] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:36.050 [2024-11-18 07:01:29.023141] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 0 00:27:36.050 [2024-11-18 07:01:29.023158] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob load 0x150 bytes 00:27:36.050 [2024-11-18 07:01:29.023189] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] base layout blob load 0x48 bytes 00:27:36.050 [2024-11-18 07:01:29.023203] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] layout blob load 0x190 bytes 00:27:36.050 [2024-11-18 07:01:29.023288] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:27:36.050 [2024-11-18 07:01:29.023300] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:27:36.050 [2024-11-18 07:01:29.023308] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 00:27:36.050 [2024-11-18 07:01:29.023316] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:27:36.050 [2024-11-18 07:01:29.023323] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device 
capacity: 5120.00 MiB 00:27:36.050 [2024-11-18 07:01:29.023333] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:27:36.050 [2024-11-18 07:01:29.023341] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:27:36.050 [2024-11-18 07:01:29.023347] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:27:36.050 [2024-11-18 07:01:29.023353] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:27:36.050 [2024-11-18 07:01:29.023360] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:36.050 [2024-11-18 07:01:29.023368] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:27:36.050 [2024-11-18 07:01:29.023377] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.221 ms 00:27:36.050 [2024-11-18 07:01:29.023382] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:36.050 [2024-11-18 07:01:29.023449] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:36.050 [2024-11-18 07:01:29.023456] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:27:36.050 [2024-11-18 07:01:29.023462] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.055 ms 00:27:36.050 [2024-11-18 07:01:29.023467] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:36.050 [2024-11-18 07:01:29.023545] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:27:36.050 [2024-11-18 07:01:29.023553] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:27:36.050 [2024-11-18 07:01:29.023560] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:27:36.050 [2024-11-18 07:01:29.023568] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:36.050 [2024-11-18 07:01:29.023574] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:27:36.050 [2024-11-18 07:01:29.023579] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:27:36.050 [2024-11-18 07:01:29.023585] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:27:36.050 [2024-11-18 07:01:29.023591] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:27:36.050 [2024-11-18 07:01:29.023598] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:27:36.050 [2024-11-18 07:01:29.023603] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:36.050 [2024-11-18 07:01:29.023608] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:27:36.050 [2024-11-18 07:01:29.023614] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 00:27:36.050 [2024-11-18 07:01:29.023619] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:36.050 [2024-11-18 07:01:29.023625] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:27:36.050 [2024-11-18 07:01:29.023631] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 00:27:36.050 [2024-11-18 07:01:29.023642] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:36.050 [2024-11-18 07:01:29.023651] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:27:36.050 [2024-11-18 07:01:29.023656] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:27:36.050 [2024-11-18 07:01:29.023661] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:36.050 [2024-11-18 07:01:29.023667] 
ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:27:36.050 [2024-11-18 07:01:29.023672] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:27:36.050 [2024-11-18 07:01:29.023677] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:27:36.050 [2024-11-18 07:01:29.023682] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:27:36.050 [2024-11-18 07:01:29.023687] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:27:36.050 [2024-11-18 07:01:29.023692] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:27:36.050 [2024-11-18 07:01:29.023698] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:27:36.050 [2024-11-18 07:01:29.023705] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:27:36.050 [2024-11-18 07:01:29.023710] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:27:36.050 [2024-11-18 07:01:29.023716] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:27:36.050 [2024-11-18 07:01:29.023721] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:27:36.050 [2024-11-18 07:01:29.023727] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:27:36.050 [2024-11-18 07:01:29.023733] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:27:36.050 [2024-11-18 07:01:29.023742] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:27:36.050 [2024-11-18 07:01:29.023749] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:36.050 [2024-11-18 07:01:29.023754] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:27:36.050 [2024-11-18 07:01:29.023760] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:27:36.050 [2024-11-18 07:01:29.023766] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:36.050 [2024-11-18 07:01:29.023772] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:27:36.050 [2024-11-18 07:01:29.023779] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:27:36.050 [2024-11-18 07:01:29.023785] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:36.050 [2024-11-18 07:01:29.023791] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:27:36.050 [2024-11-18 07:01:29.023796] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:27:36.050 [2024-11-18 07:01:29.023802] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:36.050 [2024-11-18 07:01:29.023808] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout: 00:27:36.050 [2024-11-18 07:01:29.023816] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:27:36.050 [2024-11-18 07:01:29.023822] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:27:36.050 [2024-11-18 07:01:29.023832] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:36.050 [2024-11-18 07:01:29.023839] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:27:36.050 [2024-11-18 07:01:29.023847] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:27:36.050 [2024-11-18 07:01:29.023853] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:27:36.050 [2024-11-18 07:01:29.023860] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:27:36.050 [2024-11-18 07:01:29.023865] 
ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:27:36.050 [2024-11-18 07:01:29.023872] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:27:36.050 [2024-11-18 07:01:29.023879] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:27:36.050 [2024-11-18 07:01:29.023886] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:36.050 [2024-11-18 07:01:29.023894] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:27:36.050 [2024-11-18 07:01:29.023901] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:27:36.050 [2024-11-18 07:01:29.023907] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:27:36.050 [2024-11-18 07:01:29.023914] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:27:36.050 [2024-11-18 07:01:29.023920] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:27:36.050 [2024-11-18 07:01:29.023926] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:27:36.050 [2024-11-18 07:01:29.023933] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:27:36.050 [2024-11-18 07:01:29.023939] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:27:36.050 [2024-11-18 07:01:29.023945] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:27:36.050 [2024-11-18 07:01:29.023953] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:27:36.050 [2024-11-18 07:01:29.023960] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:27:36.050 [2024-11-18 07:01:29.023966] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:27:36.050 [2024-11-18 07:01:29.023973] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:27:36.051 [2024-11-18 07:01:29.023995] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:27:36.051 [2024-11-18 07:01:29.024002] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:27:36.051 [2024-11-18 07:01:29.024009] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:36.051 [2024-11-18 07:01:29.024019] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:27:36.051 [2024-11-18 07:01:29.024027] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region 
type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:27:36.051 [2024-11-18 07:01:29.024034] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:27:36.051 [2024-11-18 07:01:29.024041] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:27:36.051 [2024-11-18 07:01:29.024047] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:36.051 [2024-11-18 07:01:29.024056] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:27:36.051 [2024-11-18 07:01:29.024062] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.558 ms 00:27:36.051 [2024-11-18 07:01:29.024072] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:36.051 [2024-11-18 07:01:29.024111] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] NV cache data region needs scrubbing, this may take a while. 00:27:36.051 [2024-11-18 07:01:29.024120] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] Scrubbing 5 chunks 00:27:40.247 [2024-11-18 07:01:32.660933] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:40.247 [2024-11-18 07:01:32.661048] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Scrub NV cache 00:27:40.247 [2024-11-18 07:01:32.661069] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3636.801 ms 00:27:40.247 [2024-11-18 07:01:32.661079] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:40.247 [2024-11-18 07:01:32.679920] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:40.247 [2024-11-18 07:01:32.679998] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:27:40.247 [2024-11-18 07:01:32.680013] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 18.695 ms 00:27:40.247 [2024-11-18 07:01:32.680023] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:40.247 [2024-11-18 07:01:32.680145] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:40.247 [2024-11-18 07:01:32.680170] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:27:40.247 [2024-11-18 07:01:32.680180] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.021 ms 00:27:40.247 [2024-11-18 07:01:32.680189] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:40.247 [2024-11-18 07:01:32.697660] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:40.247 [2024-11-18 07:01:32.697708] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:27:40.247 [2024-11-18 07:01:32.697722] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 17.425 ms 00:27:40.247 [2024-11-18 07:01:32.697733] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:40.247 [2024-11-18 07:01:32.697779] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:40.247 [2024-11-18 07:01:32.697791] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:27:40.247 [2024-11-18 07:01:32.697802] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:27:40.248 [2024-11-18 07:01:32.697822] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:40.248 [2024-11-18 07:01:32.698565] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:40.248 [2024-11-18 07:01:32.698605] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:27:40.248 [2024-11-18 07:01:32.698616] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.689 ms 00:27:40.248 [2024-11-18 07:01:32.698625] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:40.248 [2024-11-18 07:01:32.698688] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:40.248 [2024-11-18 07:01:32.698699] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:27:40.248 [2024-11-18 07:01:32.698707] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.038 ms 00:27:40.248 [2024-11-18 07:01:32.698716] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:40.248 [2024-11-18 07:01:32.710807] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:40.248 [2024-11-18 07:01:32.711040] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:27:40.248 [2024-11-18 07:01:32.711229] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 12.057 ms 00:27:40.248 [2024-11-18 07:01:32.711260] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:40.248 [2024-11-18 07:01:32.716155] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: full chunks = 0, empty chunks = 4 00:27:40.248 [2024-11-18 07:01:32.716350] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: state loaded successfully 00:27:40.248 [2024-11-18 07:01:32.716428] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:40.248 [2024-11-18 07:01:32.716451] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore NV cache metadata 00:27:40.248 [2024-11-18 07:01:32.716473] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.990 ms 00:27:40.248 [2024-11-18 07:01:32.716493] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:40.248 [2024-11-18 07:01:32.721731] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:40.248 [2024-11-18 07:01:32.721923] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore valid map metadata 00:27:40.248 [2024-11-18 07:01:32.722014] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 5.180 ms 00:27:40.248 [2024-11-18 07:01:32.722040] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:40.248 [2024-11-18 07:01:32.726394] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:40.248 [2024-11-18 07:01:32.726764] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore band info metadata 00:27:40.248 [2024-11-18 07:01:32.727094] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.072 ms 00:27:40.248 [2024-11-18 07:01:32.727129] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:40.248 [2024-11-18 07:01:32.730810] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:40.248 [2024-11-18 07:01:32.731118] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore trim metadata 00:27:40.248 [2024-11-18 07:01:32.731270] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3.588 ms 00:27:40.248 [2024-11-18 07:01:32.731331] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:40.248 [2024-11-18 07:01:32.733307] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:40.248 [2024-11-18 07:01:32.733379] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:27:40.248 [2024-11-18 
07:01:32.733411] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.796 ms 00:27:40.248 [2024-11-18 07:01:32.733432] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:40.248 [2024-11-18 07:01:32.776771] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:40.248 [2024-11-18 07:01:32.777051] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore P2L checkpoints 00:27:40.248 [2024-11-18 07:01:32.777076] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 43.277 ms 00:27:40.248 [2024-11-18 07:01:32.777087] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:40.248 [2024-11-18 07:01:32.786332] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:27:40.248 [2024-11-18 07:01:32.787518] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:40.248 [2024-11-18 07:01:32.787551] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:27:40.248 [2024-11-18 07:01:32.787565] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 10.309 ms 00:27:40.248 [2024-11-18 07:01:32.787574] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:40.248 [2024-11-18 07:01:32.787671] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:40.248 [2024-11-18 07:01:32.787694] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore L2P 00:27:40.248 [2024-11-18 07:01:32.787705] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.015 ms 00:27:40.248 [2024-11-18 07:01:32.787714] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:40.248 [2024-11-18 07:01:32.787782] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:40.248 [2024-11-18 07:01:32.787794] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:27:40.248 [2024-11-18 07:01:32.787807] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.026 ms 00:27:40.248 [2024-11-18 07:01:32.787816] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:40.248 [2024-11-18 07:01:32.787846] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:40.248 [2024-11-18 07:01:32.787856] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:27:40.248 [2024-11-18 07:01:32.787866] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:27:40.248 [2024-11-18 07:01:32.787875] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:40.248 [2024-11-18 07:01:32.787915] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl] Self test skipped 00:27:40.248 [2024-11-18 07:01:32.787935] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:40.248 [2024-11-18 07:01:32.787944] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Self test on startup 00:27:40.248 [2024-11-18 07:01:32.787952] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.021 ms 00:27:40.248 [2024-11-18 07:01:32.787964] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:40.248 [2024-11-18 07:01:32.793647] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:40.248 [2024-11-18 07:01:32.793688] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL dirty state 00:27:40.248 [2024-11-18 07:01:32.793701] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 5.638 ms 00:27:40.248 [2024-11-18 07:01:32.793711] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:40.248 [2024-11-18 07:01:32.793819] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:40.248 [2024-11-18 07:01:32.793830] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:27:40.248 [2024-11-18 07:01:32.793840] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.054 ms 00:27:40.248 [2024-11-18 07:01:32.793850] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:40.248 [2024-11-18 07:01:32.795487] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 3783.914 ms, result 0 00:27:40.248 [2024-11-18 07:01:32.808291] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:40.248 [2024-11-18 07:01:32.824281] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:27:40.248 [2024-11-18 07:01:32.832473] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:27:40.248 07:01:32 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:40.248 07:01:32 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@868 -- # return 0 00:27:40.248 07:01:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:27:40.248 07:01:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@95 -- # return 0 00:27:40.248 07:01:32 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:27:40.248 [2024-11-18 07:01:33.076483] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:40.248 [2024-11-18 07:01:33.076707] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:27:40.248 [2024-11-18 07:01:33.076777] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.010 ms 00:27:40.248 [2024-11-18 07:01:33.076803] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:40.248 [2024-11-18 07:01:33.076853] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:40.248 [2024-11-18 07:01:33.076878] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:27:40.248 [2024-11-18 07:01:33.076900] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:27:40.248 [2024-11-18 07:01:33.076928] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:40.248 [2024-11-18 07:01:33.076961] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:40.248 [2024-11-18 07:01:33.077010] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:27:40.248 [2024-11-18 07:01:33.077030] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:27:40.248 [2024-11-18 07:01:33.077039] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:40.248 [2024-11-18 07:01:33.077113] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.617 ms, result 0 00:27:40.248 true 00:27:40.248 07:01:33 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:27:40.248 { 00:27:40.248 "name": "ftl", 00:27:40.248 "properties": [ 00:27:40.248 { 00:27:40.248 "name": "superblock_version", 00:27:40.248 "value": 5, 00:27:40.248 "read-only": true 00:27:40.248 }, 
00:27:40.248 {
00:27:40.248 "name": "base_device",
00:27:40.248 "bands": [
00:27:40.248 {
00:27:40.248 "id": 0,
00:27:40.248 "state": "CLOSED",
00:27:40.248 "validity": 1.0
00:27:40.248 },
00:27:40.248 {
00:27:40.248 "id": 1,
00:27:40.248 "state": "CLOSED",
00:27:40.248 "validity": 1.0
00:27:40.248 },
00:27:40.248 {
00:27:40.248 "id": 2,
00:27:40.248 "state": "CLOSED",
00:27:40.248 "validity": 0.007843137254901933
00:27:40.248 },
00:27:40.248 {
00:27:40.248 "id": 3,
00:27:40.248 "state": "FREE",
00:27:40.248 "validity": 0.0
00:27:40.248 },
00:27:40.248 {
00:27:40.248 "id": 4,
00:27:40.248 "state": "FREE",
00:27:40.248 "validity": 0.0
00:27:40.248 },
00:27:40.248 {
00:27:40.248 "id": 5,
00:27:40.248 "state": "FREE",
00:27:40.248 "validity": 0.0
00:27:40.248 },
00:27:40.248 {
00:27:40.248 "id": 6,
00:27:40.249 "state": "FREE",
00:27:40.249 "validity": 0.0
00:27:40.249 },
00:27:40.249 {
00:27:40.249 "id": 7,
00:27:40.249 "state": "FREE",
00:27:40.249 "validity": 0.0
00:27:40.249 },
00:27:40.249 {
00:27:40.249 "id": 8,
00:27:40.249 "state": "FREE",
00:27:40.249 "validity": 0.0
00:27:40.249 },
00:27:40.249 {
00:27:40.249 "id": 9,
00:27:40.249 "state": "FREE",
00:27:40.249 "validity": 0.0
00:27:40.249 },
00:27:40.249 {
00:27:40.249 "id": 10,
00:27:40.249 "state": "FREE",
00:27:40.249 "validity": 0.0
00:27:40.249 },
00:27:40.249 {
00:27:40.249 "id": 11,
00:27:40.249 "state": "FREE",
00:27:40.249 "validity": 0.0
00:27:40.249 },
00:27:40.249 {
00:27:40.249 "id": 12,
00:27:40.249 "state": "FREE",
00:27:40.249 "validity": 0.0
00:27:40.249 },
00:27:40.249 {
00:27:40.249 "id": 13,
00:27:40.249 "state": "FREE",
00:27:40.249 "validity": 0.0
00:27:40.249 },
00:27:40.249 {
00:27:40.249 "id": 14,
00:27:40.249 "state": "FREE",
00:27:40.249 "validity": 0.0
00:27:40.249 },
00:27:40.249 {
00:27:40.249 "id": 15,
00:27:40.249 "state": "FREE",
00:27:40.249 "validity": 0.0
00:27:40.249 },
00:27:40.249 {
00:27:40.249 "id": 16,
00:27:40.249 "state": "FREE",
00:27:40.249 "validity": 0.0
00:27:40.249 },
00:27:40.249 {
00:27:40.249 "id": 17,
00:27:40.249 "state": "FREE",
00:27:40.249 "validity": 0.0
00:27:40.249 }
00:27:40.249 ],
00:27:40.249 "read-only": true
00:27:40.249 },
00:27:40.249 {
00:27:40.249 "name": "cache_device",
00:27:40.249 "type": "bdev",
00:27:40.249 "chunks": [
00:27:40.249 {
00:27:40.249 "id": 0,
00:27:40.249 "state": "INACTIVE",
00:27:40.249 "utilization": 0.0
00:27:40.249 },
00:27:40.249 {
00:27:40.249 "id": 1,
00:27:40.249 "state": "OPEN",
00:27:40.249 "utilization": 0.0
00:27:40.249 },
00:27:40.249 {
00:27:40.249 "id": 2,
00:27:40.249 "state": "OPEN",
00:27:40.249 "utilization": 0.0
00:27:40.249 },
00:27:40.249 {
00:27:40.249 "id": 3,
00:27:40.249 "state": "FREE",
00:27:40.249 "utilization": 0.0
00:27:40.249 },
00:27:40.249 {
00:27:40.249 "id": 4,
00:27:40.249 "state": "FREE",
00:27:40.249 "utilization": 0.0
00:27:40.249 }
00:27:40.249 ],
00:27:40.249 "read-only": true
00:27:40.249 },
00:27:40.249 {
00:27:40.249 "name": "verbose_mode",
00:27:40.249 "value": true,
00:27:40.249 "unit": "",
00:27:40.249 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties"
00:27:40.249 },
00:27:40.249 {
00:27:40.249 "name": "prep_upgrade_on_shutdown",
00:27:40.249 "value": false,
00:27:40.249 "unit": "",
00:27:40.249 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version"
00:27:40.249 }
00:27:40.249 ]
00:27:40.249 }
00:27:40.510 07:01:33 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # ftl_get_properties
00:27:40.510 07:01:33
ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length' 00:27:40.510 07:01:33 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:27:40.510 07:01:33 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # used=0 00:27:40.510 07:01:33 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@83 -- # [[ 0 -ne 0 ]] 00:27:40.510 07:01:33 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # jq '[.properties[] | select(.name == "bands") | .bands[] | select(.state == "OPENED")] | length' 00:27:40.510 07:01:33 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # ftl_get_properties 00:27:40.510 07:01:33 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:27:40.772 Validate MD5 checksum, iteration 1 00:27:40.773 07:01:33 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # opened=0 00:27:40.773 07:01:33 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@90 -- # [[ 0 -ne 0 ]] 00:27:40.773 07:01:33 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@111 -- # test_validate_checksum 00:27:40.773 07:01:33 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@96 -- # skip=0 00:27:40.773 07:01:33 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i = 0 )) 00:27:40.773 07:01:33 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:27:40.773 07:01:33 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 1' 00:27:40.773 07:01:33 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:27:40.773 07:01:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:27:40.773 07:01:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:27:40.773 07:01:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:27:40.773 07:01:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:27:40.773 07:01:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:27:40.773 [2024-11-18 07:01:33.854431] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 
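The two jq pipelines above are the post-restart cleanliness check: after an upgrade-ready shutdown, no cache chunk should hold unmigrated data (non-zero utilization) and no band should still be OPENED, and both counts come back 0 (used=0, opened=0). Reassembled from the xtrace, the check looks roughly like this (variable names as echoed by upgrade_shutdown.sh; the exit-on-failure handling is a sketch):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  # count cache chunks that still carry data
  used=$("$rpc" bdev_ftl_get_properties -b ftl \
      | jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length')
  [[ $used -ne 0 ]] && exit 1
  # count bands left in the OPENED state
  opened=$("$rpc" bdev_ftl_get_properties -b ftl \
      | jq '[.properties[] | select(.name == "bands") | .bands[] | select(.state == "OPENED")] | length')
  [[ $opened -ne 0 ]] && exit 1

Note that the second filter, exactly as echoed, selects a property literally named "bands", which does not appear in the dump above (the bands array lives under "base_device"), so it trivially yields 0. With the device verified clean, the spdk_dd process starting below performs the first read pass of the checksum validation.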
00:27:40.773 [2024-11-18 07:01:33.854568] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92261 ] 00:27:41.034 [2024-11-18 07:01:34.017180] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:41.034 [2024-11-18 07:01:34.046667] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:42.419  [2024-11-18T07:01:36.447Z] Copying: 482/1024 [MB] (482 MBps) [2024-11-18T07:01:36.707Z] Copying: 951/1024 [MB] (469 MBps) [2024-11-18T07:01:37.275Z] Copying: 1024/1024 [MB] (average 477 MBps) 00:27:44.188 00:27:44.446 07:01:37 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=1024 00:27:44.446 07:01:37 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:27:46.975 07:01:39 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:27:46.975 Validate MD5 checksum, iteration 2 00:27:46.975 07:01:39 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=ff48456bf7e2ade94f791ad39ed2263a 00:27:46.975 07:01:39 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ ff48456bf7e2ade94f791ad39ed2263a != \f\f\4\8\4\5\6\b\f\7\e\2\a\d\e\9\4\f\7\9\1\a\d\3\9\e\d\2\2\6\3\a ]] 00:27:46.975 07:01:39 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:27:46.975 07:01:39 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:27:46.975 07:01:39 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 2' 00:27:46.975 07:01:39 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:27:46.975 07:01:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:27:46.975 07:01:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:27:46.975 07:01:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:27:46.975 07:01:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:27:46.975 07:01:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:27:46.975 [2024-11-18 07:01:39.510568] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 
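Each validation pass reads 1 GiB (1024 blocks of 1 MiB at queue depth 2) from the ftln1 namespace over NVMe/TCP into a scratch file with spdk_dd, hashes the file, and compares the digest against the value recorded for that offset earlier in the test; iteration 1 above matched ff48456bf7e2ade94f791ad39ed2263a and advanced skip from 0 to 1024 for iteration 2. One pass, with the spdk_dd flags verbatim from the run (the $skip/$expected bookkeeping is simplified here):

  spdk=/home/vagrant/spdk_repo/spdk
  # tcp_dd: read 1024 x 1 MiB from ftln1 starting at $skip blocks
  "$spdk/build/bin/spdk_dd" '--cpumask=[1]' \
      --rpc-socket=/var/tmp/spdk.tgt.sock \
      --json="$spdk/test/ftl/config/ini.json" \
      --ib=ftln1 --of="$spdk/test/ftl/file" \
      --bs=1048576 --count=1024 --qd=2 --skip="$skip"
  # hash the scratch file and compare with the stored checksum
  sum=$(md5sum "$spdk/test/ftl/file" | cut -f1 '-d ')
  [[ $sum == "$expected" ]] || exit 1
  skip=$((skip + 1024))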
00:27:46.975 [2024-11-18 07:01:39.510810] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92328 ] 00:27:46.975 [2024-11-18 07:01:39.669131] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:46.975 [2024-11-18 07:01:39.688421] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:48.358  [2024-11-18T07:01:42.013Z] Copying: 632/1024 [MB] (632 MBps) [2024-11-18T07:01:47.282Z] Copying: 1024/1024 [MB] (average 583 MBps) 00:27:54.195 00:27:54.195 07:01:46 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=2048 00:27:54.195 07:01:46 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:27:55.692 07:01:48 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:27:55.692 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:55.692 07:01:48 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=d9533e6a9a950f4a97695671825535ea 00:27:55.692 07:01:48 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ d9533e6a9a950f4a97695671825535ea != \d\9\5\3\3\e\6\a\9\a\9\5\0\f\4\a\9\7\6\9\5\6\7\1\8\2\5\5\3\5\e\a ]] 00:27:55.692 07:01:48 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:27:55.692 07:01:48 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:27:55.692 07:01:48 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@114 -- # tcp_target_shutdown_dirty 00:27:55.692 07:01:48 ftl.ftl_upgrade_shutdown -- ftl/common.sh@137 -- # [[ -n 92189 ]] 00:27:55.692 07:01:48 ftl.ftl_upgrade_shutdown -- ftl/common.sh@138 -- # kill -9 92189 00:27:55.692 07:01:48 ftl.ftl_upgrade_shutdown -- ftl/common.sh@139 -- # unset spdk_tgt_pid 00:27:55.692 07:01:48 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@115 -- # tcp_target_setup 00:27:55.692 07:01:48 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:27:55.692 07:01:48 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:27:55.692 07:01:48 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:27:55.692 07:01:48 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=92422 00:27:55.692 07:01:48 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:27:55.692 07:01:48 ftl.ftl_upgrade_shutdown -- ftl/common.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:27:55.692 07:01:48 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 92422 00:27:55.692 07:01:48 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # '[' -z 92422 ']' 00:27:55.692 07:01:48 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:55.692 07:01:48 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:55.692 07:01:48 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
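
With both windows' digests recorded, the trace above kills the target outright (kill -9, so none of the FTL shutdown path runs) and relaunches it from the saved JSON config. Stripped to its essentials, with variable names taken from the trace (the real helpers in ftl/common.sh carry more setup):

    tcp_target_shutdown_dirty() {
        # SIGKILL leaves the FTL device dirty on purpose.
        [[ -n $spdk_tgt_pid ]] && kill -9 "$spdk_tgt_pid"
        unset spdk_tgt_pid
    }

    tcp_target_setup() {
        # Relaunch from the saved config; FTL must now recover its state from
        # shared memory and the NV cache instead of a cleanly written superblock.
        "$spdk_tgt_bin" "--cpumask=$spdk_tgt_cpumask" --config="$spdk_tgt_cnfg" &
        spdk_tgt_pid=$!
        waitforlisten "$spdk_tgt_pid"
    }
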
00:27:55.692 07:01:48 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:55.692 07:01:48 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:27:55.692 [2024-11-18 07:01:48.645417] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 00:27:55.692 [2024-11-18 07:01:48.645654] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92422 ] 00:27:55.951 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 834: 92189 Killed $spdk_tgt_bin "--cpumask=$spdk_tgt_cpumask" --config="$spdk_tgt_cnfg" 00:27:55.951 [2024-11-18 07:01:48.793844] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:55.951 [2024-11-18 07:01:48.818231] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:56.211 [2024-11-18 07:01:49.116117] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:27:56.211 [2024-11-18 07:01:49.116334] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:27:56.211 [2024-11-18 07:01:49.262318] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:56.211 [2024-11-18 07:01:49.262354] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:27:56.211 [2024-11-18 07:01:49.262367] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:27:56.211 [2024-11-18 07:01:49.262374] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:56.211 [2024-11-18 07:01:49.262419] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:56.211 [2024-11-18 07:01:49.262427] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:27:56.211 [2024-11-18 07:01:49.262435] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.026 ms 00:27:56.211 [2024-11-18 07:01:49.262441] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:56.211 [2024-11-18 07:01:49.262458] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:27:56.211 [2024-11-18 07:01:49.262639] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:27:56.211 [2024-11-18 07:01:49.262652] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:56.211 [2024-11-18 07:01:49.262658] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:27:56.211 [2024-11-18 07:01:49.262664] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.200 ms 00:27:56.211 [2024-11-18 07:01:49.262670] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:56.211 [2024-11-18 07:01:49.262867] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl] SHM: clean 0, shm_clean 0 00:27:56.211 [2024-11-18 07:01:49.267383] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:56.211 [2024-11-18 07:01:49.267412] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Load super block 00:27:56.211 [2024-11-18 07:01:49.267424] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.517 ms 00:27:56.211 [2024-11-18 07:01:49.267434] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:56.211 [2024-11-18 07:01:49.268381] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] 
Action 00:27:56.211 [2024-11-18 07:01:49.268408] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Validate super block 00:27:56.211 [2024-11-18 07:01:49.268420] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.025 ms 00:27:56.211 [2024-11-18 07:01:49.268428] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:56.211 [2024-11-18 07:01:49.268638] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:56.211 [2024-11-18 07:01:49.268647] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:27:56.211 [2024-11-18 07:01:49.268657] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.171 ms 00:27:56.211 [2024-11-18 07:01:49.268663] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:56.211 [2024-11-18 07:01:49.268695] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:56.211 [2024-11-18 07:01:49.268702] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:27:56.211 [2024-11-18 07:01:49.268710] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.017 ms 00:27:56.211 [2024-11-18 07:01:49.268716] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:56.211 [2024-11-18 07:01:49.268736] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:56.211 [2024-11-18 07:01:49.268743] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:27:56.211 [2024-11-18 07:01:49.268753] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:27:56.211 [2024-11-18 07:01:49.268758] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:56.211 [2024-11-18 07:01:49.268774] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:27:56.211 [2024-11-18 07:01:49.269511] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:56.211 [2024-11-18 07:01:49.269530] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:27:56.211 [2024-11-18 07:01:49.269538] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.740 ms 00:27:56.211 [2024-11-18 07:01:49.269543] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:56.211 [2024-11-18 07:01:49.269564] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:56.211 [2024-11-18 07:01:49.269574] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:27:56.211 [2024-11-18 07:01:49.269580] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:27:56.212 [2024-11-18 07:01:49.269586] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:56.212 [2024-11-18 07:01:49.269602] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 0 00:27:56.212 [2024-11-18 07:01:49.269619] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob load 0x150 bytes 00:27:56.212 [2024-11-18 07:01:49.269646] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] base layout blob load 0x48 bytes 00:27:56.212 [2024-11-18 07:01:49.269660] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] layout blob load 0x190 bytes 00:27:56.212 [2024-11-18 07:01:49.269744] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:27:56.212 [2024-11-18 07:01:49.269758] upgrade/ftl_sb_v5.c: 
101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:27:56.212 [2024-11-18 07:01:49.269767] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 00:27:56.212 [2024-11-18 07:01:49.269775] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:27:56.212 [2024-11-18 07:01:49.269782] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB 00:27:56.212 [2024-11-18 07:01:49.269788] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:27:56.212 [2024-11-18 07:01:49.269794] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:27:56.212 [2024-11-18 07:01:49.269800] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:27:56.212 [2024-11-18 07:01:49.269806] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:27:56.212 [2024-11-18 07:01:49.269812] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:56.212 [2024-11-18 07:01:49.269818] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:27:56.212 [2024-11-18 07:01:49.269826] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.212 ms 00:27:56.212 [2024-11-18 07:01:49.269832] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:56.212 [2024-11-18 07:01:49.269896] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:56.212 [2024-11-18 07:01:49.269907] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:27:56.212 [2024-11-18 07:01:49.269915] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.053 ms 00:27:56.212 [2024-11-18 07:01:49.269920] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:56.212 [2024-11-18 07:01:49.270012] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:27:56.212 [2024-11-18 07:01:49.270022] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:27:56.212 [2024-11-18 07:01:49.270029] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:27:56.212 [2024-11-18 07:01:49.270037] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:56.212 [2024-11-18 07:01:49.270044] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:27:56.212 [2024-11-18 07:01:49.270050] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:27:56.212 [2024-11-18 07:01:49.270056] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:27:56.212 [2024-11-18 07:01:49.270062] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:27:56.212 [2024-11-18 07:01:49.270067] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:27:56.212 [2024-11-18 07:01:49.270073] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:56.212 [2024-11-18 07:01:49.270079] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:27:56.212 [2024-11-18 07:01:49.270085] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 00:27:56.212 [2024-11-18 07:01:49.270091] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:56.212 [2024-11-18 07:01:49.270100] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:27:56.212 [2024-11-18 07:01:49.270108] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 
00:27:56.212 [2024-11-18 07:01:49.270113] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:56.212 [2024-11-18 07:01:49.270119] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:27:56.212 [2024-11-18 07:01:49.270124] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:27:56.212 [2024-11-18 07:01:49.270129] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:56.212 [2024-11-18 07:01:49.270135] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:27:56.212 [2024-11-18 07:01:49.270141] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:27:56.212 [2024-11-18 07:01:49.270145] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:27:56.212 [2024-11-18 07:01:49.270150] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:27:56.212 [2024-11-18 07:01:49.270155] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:27:56.212 [2024-11-18 07:01:49.270162] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:27:56.212 [2024-11-18 07:01:49.270168] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:27:56.212 [2024-11-18 07:01:49.270174] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:27:56.212 [2024-11-18 07:01:49.270179] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:27:56.212 [2024-11-18 07:01:49.270185] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:27:56.212 [2024-11-18 07:01:49.270191] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:27:56.212 [2024-11-18 07:01:49.270201] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:27:56.212 [2024-11-18 07:01:49.270208] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:27:56.212 [2024-11-18 07:01:49.270214] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:27:56.212 [2024-11-18 07:01:49.270219] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:56.212 [2024-11-18 07:01:49.270225] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:27:56.212 [2024-11-18 07:01:49.270231] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:27:56.212 [2024-11-18 07:01:49.270237] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:56.212 [2024-11-18 07:01:49.270243] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:27:56.212 [2024-11-18 07:01:49.270249] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:27:56.212 [2024-11-18 07:01:49.270256] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:56.212 [2024-11-18 07:01:49.270261] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:27:56.212 [2024-11-18 07:01:49.270267] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:27:56.212 [2024-11-18 07:01:49.270276] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:56.212 [2024-11-18 07:01:49.270283] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout: 00:27:56.212 [2024-11-18 07:01:49.270293] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:27:56.212 [2024-11-18 07:01:49.270301] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:27:56.212 [2024-11-18 07:01:49.270310] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 
0.12 MiB 00:27:56.212 [2024-11-18 07:01:49.270317] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:27:56.212 [2024-11-18 07:01:49.270323] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:27:56.212 [2024-11-18 07:01:49.270329] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:27:56.212 [2024-11-18 07:01:49.270335] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:27:56.212 [2024-11-18 07:01:49.270341] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:27:56.212 [2024-11-18 07:01:49.270347] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:27:56.212 [2024-11-18 07:01:49.270354] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:27:56.212 [2024-11-18 07:01:49.270363] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:56.212 [2024-11-18 07:01:49.270370] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:27:56.212 [2024-11-18 07:01:49.270376] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:27:56.212 [2024-11-18 07:01:49.270383] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:27:56.212 [2024-11-18 07:01:49.270389] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:27:56.212 [2024-11-18 07:01:49.270395] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:27:56.212 [2024-11-18 07:01:49.270401] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:27:56.212 [2024-11-18 07:01:49.270407] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:27:56.212 [2024-11-18 07:01:49.270415] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:27:56.212 [2024-11-18 07:01:49.270422] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:27:56.212 [2024-11-18 07:01:49.270428] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:27:56.212 [2024-11-18 07:01:49.270434] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:27:56.212 [2024-11-18 07:01:49.270440] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:27:56.212 [2024-11-18 07:01:49.270446] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:27:56.212 [2024-11-18 07:01:49.270453] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:27:56.212 [2024-11-18 07:01:49.270459] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata 
layout - base dev: 00:27:56.213 [2024-11-18 07:01:49.270465] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:56.213 [2024-11-18 07:01:49.270472] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:27:56.213 [2024-11-18 07:01:49.270479] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:27:56.213 [2024-11-18 07:01:49.270485] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:27:56.213 [2024-11-18 07:01:49.270493] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:27:56.213 [2024-11-18 07:01:49.270500] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:56.213 [2024-11-18 07:01:49.270507] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:27:56.213 [2024-11-18 07:01:49.270515] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.551 ms 00:27:56.213 [2024-11-18 07:01:49.270523] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:56.213 [2024-11-18 07:01:49.279015] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:56.213 [2024-11-18 07:01:49.279112] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:27:56.213 [2024-11-18 07:01:49.279155] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 8.443 ms 00:27:56.213 [2024-11-18 07:01:49.279173] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:56.213 [2024-11-18 07:01:49.279216] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:56.213 [2024-11-18 07:01:49.279233] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:27:56.213 [2024-11-18 07:01:49.279248] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.011 ms 00:27:56.213 [2024-11-18 07:01:49.279265] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:56.213 [2024-11-18 07:01:49.289130] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:56.213 [2024-11-18 07:01:49.289229] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:27:56.213 [2024-11-18 07:01:49.289266] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 9.816 ms 00:27:56.213 [2024-11-18 07:01:49.289283] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:56.213 [2024-11-18 07:01:49.289321] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:56.213 [2024-11-18 07:01:49.289338] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:27:56.213 [2024-11-18 07:01:49.289357] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:27:56.213 [2024-11-18 07:01:49.289371] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:56.213 [2024-11-18 07:01:49.289455] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:56.213 [2024-11-18 07:01:49.289478] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:27:56.213 [2024-11-18 07:01:49.289495] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.037 ms 00:27:56.213 [2024-11-18 07:01:49.289660] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl] status: 0 00:27:56.213 [2024-11-18 07:01:49.289709] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:56.213 [2024-11-18 07:01:49.289732] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:27:56.213 [2024-11-18 07:01:49.289754] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.020 ms 00:27:56.213 [2024-11-18 07:01:49.289771] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:56.473 [2024-11-18 07:01:49.296370] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:56.473 [2024-11-18 07:01:49.296653] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:27:56.473 [2024-11-18 07:01:49.296873] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 6.567 ms 00:27:56.473 [2024-11-18 07:01:49.296895] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:56.473 [2024-11-18 07:01:49.297008] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:56.473 [2024-11-18 07:01:49.297032] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize recovery 00:27:56.473 [2024-11-18 07:01:49.297041] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:27:56.473 [2024-11-18 07:01:49.297050] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:56.473 [2024-11-18 07:01:49.312401] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:56.473 [2024-11-18 07:01:49.312442] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover band state 00:27:56.473 [2024-11-18 07:01:49.312457] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 15.335 ms 00:27:56.473 [2024-11-18 07:01:49.312467] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:56.473 [2024-11-18 07:01:49.313949] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:56.473 [2024-11-18 07:01:49.313993] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:27:56.473 [2024-11-18 07:01:49.314005] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.323 ms 00:27:56.473 [2024-11-18 07:01:49.314018] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:56.473 [2024-11-18 07:01:49.332605] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:56.473 [2024-11-18 07:01:49.332632] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore P2L checkpoints 00:27:56.473 [2024-11-18 07:01:49.332645] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 18.558 ms 00:27:56.473 [2024-11-18 07:01:49.332652] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:56.473 [2024-11-18 07:01:49.332758] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=0 found seq_id=8 00:27:56.473 [2024-11-18 07:01:49.332850] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=1 found seq_id=9 00:27:56.473 [2024-11-18 07:01:49.332936] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=2 found seq_id=12 00:27:56.473 [2024-11-18 07:01:49.333037] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=3 found seq_id=0 00:27:56.473 [2024-11-18 07:01:49.333045] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:56.473 [2024-11-18 07:01:49.333052] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Preprocess P2L checkpoints 00:27:56.473 [2024-11-18 
07:01:49.333059] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.363 ms 00:27:56.473 [2024-11-18 07:01:49.333068] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:56.473 [2024-11-18 07:01:49.333098] mngt/ftl_mngt_recovery.c: 650:ftl_mngt_recovery_open_bands_p2l: *NOTICE*: [FTL][ftl] No more open bands to recover from P2L 00:27:56.473 [2024-11-18 07:01:49.333106] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:56.473 [2024-11-18 07:01:49.333112] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover open bands P2L 00:27:56.473 [2024-11-18 07:01:49.333119] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.009 ms 00:27:56.473 [2024-11-18 07:01:49.333130] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:56.473 [2024-11-18 07:01:49.336595] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:56.473 [2024-11-18 07:01:49.336619] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover chunk state 00:27:56.473 [2024-11-18 07:01:49.336628] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3.445 ms 00:27:56.473 [2024-11-18 07:01:49.336637] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:56.473 [2024-11-18 07:01:49.337119] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:56.473 [2024-11-18 07:01:49.337143] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover max seq ID 00:27:56.473 [2024-11-18 07:01:49.337152] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:27:56.473 [2024-11-18 07:01:49.337159] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:56.473 [2024-11-18 07:01:49.337213] ftl_nv_cache.c:2274:recover_open_chunk_prepare: *NOTICE*: [FTL][ftl] Start recovery open chunk, offset = 262144, seq id 14 00:27:56.473 [2024-11-18 07:01:49.337371] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:56.473 [2024-11-18 07:01:49.337387] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, prepare 00:27:56.473 [2024-11-18 07:01:49.337395] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.160 ms 00:27:56.473 [2024-11-18 07:01:49.337405] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:57.408 [2024-11-18 07:01:50.192799] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:57.408 [2024-11-18 07:01:50.192856] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, read vss 00:27:57.408 [2024-11-18 07:01:50.192869] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 855.145 ms 00:27:57.408 [2024-11-18 07:01:50.192876] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:57.408 [2024-11-18 07:01:50.195117] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:57.408 [2024-11-18 07:01:50.195142] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, persist P2L map 00:27:57.408 [2024-11-18 07:01:50.195156] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.454 ms 00:27:57.408 [2024-11-18 07:01:50.195163] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:57.408 [2024-11-18 07:01:50.195971] ftl_nv_cache.c:2323:recover_open_chunk_close_chunk_cb: *NOTICE*: [FTL][ftl] Recovered chunk, offset = 262144, seq id 14 00:27:57.408 [2024-11-18 07:01:50.196011] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:57.408 [2024-11-18 07:01:50.196020] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, close chunk 00:27:57.408 [2024-11-18 07:01:50.196028] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.824 ms 00:27:57.408 [2024-11-18 07:01:50.196035] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:57.408 [2024-11-18 07:01:50.196072] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:57.408 [2024-11-18 07:01:50.196086] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, cleanup 00:27:57.408 [2024-11-18 07:01:50.196093] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:27:57.408 [2024-11-18 07:01:50.196099] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:57.408 [2024-11-18 07:01:50.196125] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Recover open chunk', duration = 858.909 ms, result 0 00:27:57.408 [2024-11-18 07:01:50.196159] ftl_nv_cache.c:2274:recover_open_chunk_prepare: *NOTICE*: [FTL][ftl] Start recovery open chunk, offset = 524288, seq id 15 00:27:57.408 [2024-11-18 07:01:50.196311] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:57.408 [2024-11-18 07:01:50.196319] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, prepare 00:27:57.408 [2024-11-18 07:01:50.196326] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.152 ms 00:27:57.408 [2024-11-18 07:01:50.196331] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:57.975 [2024-11-18 07:01:51.054783] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:57.975 [2024-11-18 07:01:51.054822] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, read vss 00:27:57.975 [2024-11-18 07:01:51.054841] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 858.089 ms 00:27:57.975 [2024-11-18 07:01:51.054848] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:57.975 [2024-11-18 07:01:51.056638] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:57.975 [2024-11-18 07:01:51.056663] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, persist P2L map 00:27:57.975 [2024-11-18 07:01:51.056671] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.471 ms 00:27:57.975 [2024-11-18 07:01:51.056678] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:57.975 [2024-11-18 07:01:51.057790] ftl_nv_cache.c:2323:recover_open_chunk_close_chunk_cb: *NOTICE*: [FTL][ftl] Recovered chunk, offset = 524288, seq id 15 00:27:57.975 [2024-11-18 07:01:51.057813] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:57.975 [2024-11-18 07:01:51.057820] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, close chunk 00:27:57.975 [2024-11-18 07:01:51.057827] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.115 ms 00:27:57.975 [2024-11-18 07:01:51.057832] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:57.975 [2024-11-18 07:01:51.057859] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:57.975 [2024-11-18 07:01:51.057866] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, cleanup 00:27:57.975 [2024-11-18 07:01:51.057872] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:27:57.975 [2024-11-18 07:01:51.057877] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:57.975 [2024-11-18 
07:01:51.057905] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Recover open chunk', duration = 861.737 ms, result 0 00:27:57.975 [2024-11-18 07:01:51.057940] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: full chunks = 2, empty chunks = 2 00:27:57.975 [2024-11-18 07:01:51.057949] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: state loaded successfully 00:27:57.975 [2024-11-18 07:01:51.057956] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:57.975 [2024-11-18 07:01:51.057969] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover open chunks P2L 00:27:57.975 [2024-11-18 07:01:51.057987] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1720.752 ms 00:27:57.975 [2024-11-18 07:01:51.057997] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:57.975 [2024-11-18 07:01:51.058023] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:57.975 [2024-11-18 07:01:51.058030] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize recovery 00:27:57.975 [2024-11-18 07:01:51.058037] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:27:57.975 [2024-11-18 07:01:51.058043] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:58.235 [2024-11-18 07:01:51.064785] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:27:58.235 [2024-11-18 07:01:51.064858] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:58.235 [2024-11-18 07:01:51.064866] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:27:58.235 [2024-11-18 07:01:51.064876] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 6.801 ms 00:27:58.235 [2024-11-18 07:01:51.064883] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:58.235 [2024-11-18 07:01:51.065408] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:58.235 [2024-11-18 07:01:51.065424] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore L2P from shared memory 00:27:58.235 [2024-11-18 07:01:51.065431] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.482 ms 00:27:58.235 [2024-11-18 07:01:51.065437] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:58.235 [2024-11-18 07:01:51.067098] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:58.235 [2024-11-18 07:01:51.067117] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore valid maps counters 00:27:58.235 [2024-11-18 07:01:51.067124] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.649 ms 00:27:58.235 [2024-11-18 07:01:51.067131] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:58.235 [2024-11-18 07:01:51.067177] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:58.235 [2024-11-18 07:01:51.067185] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Complete trim transaction 00:27:58.235 [2024-11-18 07:01:51.067191] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:27:58.235 [2024-11-18 07:01:51.067197] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:58.235 [2024-11-18 07:01:51.067276] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:58.235 [2024-11-18 07:01:51.067284] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:27:58.235 
[2024-11-18 07:01:51.067296] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.017 ms 00:27:58.235 [2024-11-18 07:01:51.067302] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:58.235 [2024-11-18 07:01:51.067322] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:58.235 [2024-11-18 07:01:51.067330] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:27:58.235 [2024-11-18 07:01:51.067336] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:27:58.235 [2024-11-18 07:01:51.067341] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:58.235 [2024-11-18 07:01:51.067368] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl] Self test skipped 00:27:58.235 [2024-11-18 07:01:51.067375] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:58.235 [2024-11-18 07:01:51.067381] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Self test on startup 00:27:58.235 [2024-11-18 07:01:51.067387] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.009 ms 00:27:58.235 [2024-11-18 07:01:51.067397] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:58.235 [2024-11-18 07:01:51.067438] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:58.235 [2024-11-18 07:01:51.067449] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:27:58.235 [2024-11-18 07:01:51.067455] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.027 ms 00:27:58.235 [2024-11-18 07:01:51.067461] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:58.235 [2024-11-18 07:01:51.068294] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 1805.606 ms, result 0 00:27:58.235 [2024-11-18 07:01:51.081124] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:58.235 [2024-11-18 07:01:51.097133] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:27:58.235 [2024-11-18 07:01:51.105229] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:27:58.235 07:01:51 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:58.235 07:01:51 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@868 -- # return 0 00:27:58.235 07:01:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:27:58.235 07:01:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@95 -- # return 0 00:27:58.235 07:01:51 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@116 -- # test_validate_checksum 00:27:58.235 07:01:51 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@96 -- # skip=0 00:27:58.235 07:01:51 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i = 0 )) 00:27:58.235 Validate MD5 checksum, iteration 1 00:27:58.235 07:01:51 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:27:58.235 07:01:51 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 1' 00:27:58.235 07:01:51 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:27:58.235 07:01:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:27:58.235 07:01:51 
ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:27:58.235 07:01:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:27:58.235 07:01:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:27:58.235 07:01:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:27:58.235 [2024-11-18 07:01:51.259251] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 00:27:58.235 [2024-11-18 07:01:51.259363] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92457 ] 00:27:58.494 [2024-11-18 07:01:51.417680] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:58.494 [2024-11-18 07:01:51.436400] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:59.880  [2024-11-18T07:01:53.911Z] Copying: 495/1024 [MB] (495 MBps) [2024-11-18T07:01:53.911Z] Copying: 1016/1024 [MB] (521 MBps) [2024-11-18T07:01:54.482Z] Copying: 1024/1024 [MB] (average 509 MBps) 00:28:01.395 00:28:01.395 07:01:54 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=1024 00:28:01.395 07:01:54 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:28:03.925 07:01:56 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:28:03.925 07:01:56 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=ff48456bf7e2ade94f791ad39ed2263a 00:28:03.925 07:01:56 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ ff48456bf7e2ade94f791ad39ed2263a != \f\f\4\8\4\5\6\b\f\7\e\2\a\d\e\9\4\f\7\9\1\a\d\3\9\e\d\2\2\6\3\a ]] 00:28:03.925 07:01:56 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:28:03.925 07:01:56 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:28:03.925 Validate MD5 checksum, iteration 2 00:28:03.925 07:01:56 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 2' 00:28:03.925 07:01:56 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:28:03.925 07:01:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:28:03.925 07:01:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:28:03.925 07:01:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:28:03.925 07:01:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:28:03.925 07:01:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:28:03.925 
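
The long startup trace above is the payoff of the dirty restart: FTL replays the two open NV-cache chunks (seq ids 14 and 15, roughly 860 ms each), loads the cache with "full chunks = 2, empty chunks = 2", and finishes the whole 'FTL startup' process in 1805.606 ms. The first post-recovery digest (ff48456b...) matches the pre-shutdown value, so no data was lost to the SIGKILL. The same state can be inspected by hand with the properties RPC already used earlier in the trace:

    /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl \
        | jq '.properties[] | select(.name == "cache_device" or .name == "bands")'
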
[2024-11-18 07:01:56.672607] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 00:28:03.925 [2024-11-18 07:01:56.672716] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92513 ] 00:28:03.925 [2024-11-18 07:01:56.826702] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:03.925 [2024-11-18 07:01:56.845814] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:05.309  [2024-11-18T07:01:58.969Z] Copying: 617/1024 [MB] (617 MBps) [2024-11-18T07:01:59.542Z] Copying: 1024/1024 [MB] (average 577 MBps) 00:28:06.455 00:28:06.455 07:01:59 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=2048 00:28:06.455 07:01:59 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:28:08.360 07:02:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:28:08.360 07:02:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=d9533e6a9a950f4a97695671825535ea 00:28:08.360 07:02:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ d9533e6a9a950f4a97695671825535ea != \d\9\5\3\3\e\6\a\9\a\9\5\0\f\4\a\9\7\6\9\5\6\7\1\8\2\5\5\3\5\e\a ]] 00:28:08.360 07:02:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:28:08.360 07:02:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:28:08.360 07:02:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@118 -- # trap - SIGINT SIGTERM EXIT 00:28:08.360 07:02:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@119 -- # cleanup 00:28:08.360 07:02:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@11 -- # trap - SIGINT SIGTERM EXIT 00:28:08.360 07:02:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@12 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/file 00:28:08.360 07:02:01 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@13 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/file.md5 00:28:08.360 07:02:01 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@14 -- # tcp_cleanup 00:28:08.360 07:02:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@193 -- # tcp_target_cleanup 00:28:08.360 07:02:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@144 -- # tcp_target_shutdown 00:28:08.360 07:02:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@130 -- # [[ -n 92422 ]] 00:28:08.360 07:02:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@131 -- # killprocess 92422 00:28:08.360 07:02:01 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # '[' -z 92422 ']' 00:28:08.360 07:02:01 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@958 -- # kill -0 92422 00:28:08.360 07:02:01 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # uname 00:28:08.360 07:02:01 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:08.360 07:02:01 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 92422 00:28:08.360 killing process with pid 92422 00:28:08.360 07:02:01 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:28:08.360 07:02:01 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:28:08.360 07:02:01 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@972 -- # echo 'killing process with pid 92422' 
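
Both digests also match after recovery, so the test tears down cleanly this time. killprocess, whose steps are visible in the autotest_common.sh trace here, checks that the pid is alive, inspects the process name, then sends a plain SIGTERM followed by wait. Reduced to those visible steps (the real helper handles more corner cases):

    killprocess() {
        local pid=$1 process_name
        [[ -n $pid ]] || return 1
        kill -0 "$pid"        # fails fast if the target already exited
        [[ $(uname) == Linux ]] && process_name=$(ps --no-headers -o comm= "$pid")   # here: reactor_0
        # (the real helper special-cases process_name == sudo; elided)
        echo "killing process with pid $pid"
        kill "$pid"           # SIGTERM, unlike the earlier kill -9
        wait "$pid"
    }

The graceful SIGTERM is what lets the 'FTL shutdown' sequence that follows persist the L2P, metadata, and superblock and set the clean state.
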
00:28:08.360 07:02:01 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@973 -- # kill 92422 00:28:08.360 07:02:01 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@978 -- # wait 92422 00:28:08.360 [2024-11-18 07:02:01.187430] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on nvmf_tgt_poll_group_000 00:28:08.360 [2024-11-18 07:02:01.193287] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:08.360 [2024-11-18 07:02:01.193322] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinit core IO channel 00:28:08.360 [2024-11-18 07:02:01.193333] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:28:08.360 [2024-11-18 07:02:01.193341] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:08.360 [2024-11-18 07:02:01.193359] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on app_thread 00:28:08.360 [2024-11-18 07:02:01.193868] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:08.360 [2024-11-18 07:02:01.193887] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Unregister IO device 00:28:08.360 [2024-11-18 07:02:01.193898] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.499 ms 00:28:08.360 [2024-11-18 07:02:01.193905] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:08.360 [2024-11-18 07:02:01.194080] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:08.360 [2024-11-18 07:02:01.194094] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Stop core poller 00:28:08.360 [2024-11-18 07:02:01.194103] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.160 ms 00:28:08.360 [2024-11-18 07:02:01.194110] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:08.360 [2024-11-18 07:02:01.195470] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:08.360 [2024-11-18 07:02:01.195493] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist L2P 00:28:08.360 [2024-11-18 07:02:01.195500] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.348 ms 00:28:08.360 [2024-11-18 07:02:01.195506] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:08.360 [2024-11-18 07:02:01.196364] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:08.360 [2024-11-18 07:02:01.196510] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finish L2P trims 00:28:08.360 [2024-11-18 07:02:01.196523] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.829 ms 00:28:08.360 [2024-11-18 07:02:01.196529] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:08.360 [2024-11-18 07:02:01.199100] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:08.360 [2024-11-18 07:02:01.199128] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist NV cache metadata 00:28:08.360 [2024-11-18 07:02:01.199136] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.542 ms 00:28:08.360 [2024-11-18 07:02:01.199147] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:08.360 [2024-11-18 07:02:01.200634] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:08.360 [2024-11-18 07:02:01.200662] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist valid map metadata 00:28:08.360 [2024-11-18 07:02:01.200670] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.450 ms 00:28:08.360 [2024-11-18 
07:02:01.200677] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:08.361 [2024-11-18 07:02:01.200734] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:08.361 [2024-11-18 07:02:01.200741] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist P2L metadata 00:28:08.361 [2024-11-18 07:02:01.200748] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.030 ms 00:28:08.361 [2024-11-18 07:02:01.200754] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:08.361 [2024-11-18 07:02:01.202531] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:08.361 [2024-11-18 07:02:01.202555] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist band info metadata 00:28:08.361 [2024-11-18 07:02:01.202562] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.760 ms 00:28:08.361 [2024-11-18 07:02:01.202567] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:08.361 [2024-11-18 07:02:01.204589] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:08.361 [2024-11-18 07:02:01.204616] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist trim metadata 00:28:08.361 [2024-11-18 07:02:01.204623] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.996 ms 00:28:08.361 [2024-11-18 07:02:01.204628] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:08.361 [2024-11-18 07:02:01.206104] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:08.361 [2024-11-18 07:02:01.206128] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist superblock 00:28:08.361 [2024-11-18 07:02:01.206136] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.450 ms 00:28:08.361 [2024-11-18 07:02:01.206141] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:08.361 [2024-11-18 07:02:01.207884] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:08.361 [2024-11-18 07:02:01.207993] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL clean state 00:28:08.361 [2024-11-18 07:02:01.208006] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.697 ms 00:28:08.361 [2024-11-18 07:02:01.208012] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:08.361 [2024-11-18 07:02:01.208035] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Bands validity: 00:28:08.361 [2024-11-18 07:02:01.208047] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:28:08.361 [2024-11-18 07:02:01.208055] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 2: 261120 / 261120 wr_cnt: 1 state: closed 00:28:08.361 [2024-11-18 07:02:01.208062] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 3: 2048 / 261120 wr_cnt: 1 state: closed 00:28:08.361 [2024-11-18 07:02:01.208069] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:28:08.361 [2024-11-18 07:02:01.208075] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:28:08.361 [2024-11-18 07:02:01.208081] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:28:08.361 [2024-11-18 07:02:01.208087] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:28:08.361 [2024-11-18 07:02:01.208094] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 8: 0 / 
261120 wr_cnt: 0 state: free 00:28:08.361 [2024-11-18 07:02:01.208099] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:28:08.361 [2024-11-18 07:02:01.208105] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:28:08.361 [2024-11-18 07:02:01.208111] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:28:08.361 [2024-11-18 07:02:01.208117] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:28:08.361 [2024-11-18 07:02:01.208123] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:28:08.361 [2024-11-18 07:02:01.208128] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:28:08.361 [2024-11-18 07:02:01.208135] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:28:08.361 [2024-11-18 07:02:01.208141] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:28:08.361 [2024-11-18 07:02:01.208146] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:28:08.361 [2024-11-18 07:02:01.208152] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:28:08.361 [2024-11-18 07:02:01.208160] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] 00:28:08.361 [2024-11-18 07:02:01.208166] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] device UUID: f81c9f16-3d43-431e-b00e-49c2eed8fcc9 00:28:08.361 [2024-11-18 07:02:01.208173] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total valid LBAs: 524288 00:28:08.361 [2024-11-18 07:02:01.208178] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total writes: 320 00:28:08.361 [2024-11-18 07:02:01.208184] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] user writes: 0 00:28:08.361 [2024-11-18 07:02:01.208190] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] WAF: inf 00:28:08.361 [2024-11-18 07:02:01.208196] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] limits: 00:28:08.361 [2024-11-18 07:02:01.208202] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] crit: 0 00:28:08.361 [2024-11-18 07:02:01.208207] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] high: 0 00:28:08.361 [2024-11-18 07:02:01.208212] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] low: 0 00:28:08.361 [2024-11-18 07:02:01.208218] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] start: 0 00:28:08.361 [2024-11-18 07:02:01.208225] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:08.361 [2024-11-18 07:02:01.208237] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Dump statistics 00:28:08.361 [2024-11-18 07:02:01.208244] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.192 ms 00:28:08.361 [2024-11-18 07:02:01.208251] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:08.361 [2024-11-18 07:02:01.209904] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:08.361 [2024-11-18 07:02:01.209929] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize L2P 00:28:08.361 [2024-11-18 07:02:01.209937] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.641 ms 00:28:08.361 [2024-11-18 07:02:01.209943] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:08.361 [2024-11-18 07:02:01.210044] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:08.361 [2024-11-18 07:02:01.210055] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize P2L checkpointing 00:28:08.361 [2024-11-18 07:02:01.210063] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.087 ms 00:28:08.361 [2024-11-18 07:02:01.210070] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:08.361 [2024-11-18 07:02:01.216044] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:08.361 [2024-11-18 07:02:01.216069] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:28:08.361 [2024-11-18 07:02:01.216077] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:08.361 [2024-11-18 07:02:01.216084] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:08.361 [2024-11-18 07:02:01.216112] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:08.361 [2024-11-18 07:02:01.216120] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:28:08.361 [2024-11-18 07:02:01.216126] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:08.361 [2024-11-18 07:02:01.216132] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:08.361 [2024-11-18 07:02:01.216181] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:08.361 [2024-11-18 07:02:01.216189] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:28:08.361 [2024-11-18 07:02:01.216196] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:08.361 [2024-11-18 07:02:01.216202] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:08.361 [2024-11-18 07:02:01.216223] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:08.361 [2024-11-18 07:02:01.216232] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:28:08.361 [2024-11-18 07:02:01.216239] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:08.361 [2024-11-18 07:02:01.216246] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:08.361 [2024-11-18 07:02:01.227332] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:08.361 [2024-11-18 07:02:01.227364] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:28:08.361 [2024-11-18 07:02:01.227373] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:08.361 [2024-11-18 07:02:01.227385] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:08.361 [2024-11-18 07:02:01.235740] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:08.361 [2024-11-18 07:02:01.235772] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:28:08.361 [2024-11-18 07:02:01.235781] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:08.361 [2024-11-18 07:02:01.235788] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:08.361 [2024-11-18 07:02:01.235851] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:08.361 [2024-11-18 07:02:01.235858] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:28:08.361 [2024-11-18 07:02:01.235865] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 
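Each FTL management step in the trace above is logged by trace_step as an Action (or, during teardown, a Rollback) together with its name, duration, and status, so the whole 'FTL shutdown' process can be audited step by step; the statistics dump earlier reports WAF (write amplification factor) as total writes over user writes, which prints "inf" here because the device recorded 320 internal metadata writes against zero user writes. To cross-check the per-step durations against the final "duration = ..." summary that closes the process, a throwaway awk pass over a captured log is enough; the file name ftl.log below is hypothetical:

    # Sum the per-step durations emitted by trace_step (Action and Rollback alike);
    # the duration value sits just before the trailing "ms" token.
    awk '/trace_step/ && /duration:/ { total += $(NF-1) }
         END { printf "sum of step durations: %.3f ms\n", total }' ftl.log
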
00:28:08.361 [2024-11-18 07:02:01.235872] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:08.361 [2024-11-18 07:02:01.235903] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:08.361 [2024-11-18 07:02:01.235912] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:28:08.361 [2024-11-18 07:02:01.235921] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:08.361 [2024-11-18 07:02:01.235928] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:08.361 [2024-11-18 07:02:01.236001] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:08.361 [2024-11-18 07:02:01.236010] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:28:08.361 [2024-11-18 07:02:01.236016] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:08.361 [2024-11-18 07:02:01.236022] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:08.361 [2024-11-18 07:02:01.236050] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:08.361 [2024-11-18 07:02:01.236057] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize superblock 00:28:08.361 [2024-11-18 07:02:01.236064] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:08.361 [2024-11-18 07:02:01.236072] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:08.361 [2024-11-18 07:02:01.236110] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:08.361 [2024-11-18 07:02:01.236121] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:28:08.361 [2024-11-18 07:02:01.236128] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:08.361 [2024-11-18 07:02:01.236134] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:08.361 [2024-11-18 07:02:01.236179] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:08.361 [2024-11-18 07:02:01.236187] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:28:08.361 [2024-11-18 07:02:01.236196] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:08.361 [2024-11-18 07:02:01.236203] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:08.361 [2024-11-18 07:02:01.236309] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL shutdown', duration = 42.994 ms, result 0 00:28:08.361 07:02:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@132 -- # unset spdk_tgt_pid 00:28:08.361 07:02:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@145 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:28:08.361 07:02:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@194 -- # tcp_initiator_cleanup 00:28:08.361 07:02:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@188 -- # tcp_initiator_shutdown 00:28:08.361 07:02:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@181 -- # [[ -n '' ]] 00:28:08.361 07:02:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@189 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:28:08.361 Remove shared memory files 00:28:08.361 07:02:01 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@15 -- # remove_shm 00:28:08.361 07:02:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@204 -- # echo Remove shared memory files 00:28:08.361 07:02:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@205 -- # rm -f rm -f 00:28:08.361 07:02:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@206 -- # rm -f 
rm -f 00:28:08.361 07:02:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@207 -- # rm -f rm -f /dev/shm/spdk_tgt_trace.pid92189 00:28:08.361 07:02:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:28:08.361 07:02:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@209 -- # rm -f rm -f 00:28:08.361 00:28:08.361 real 1m14.729s 00:28:08.361 user 1m38.397s 00:28:08.361 sys 0m20.444s 00:28:08.361 07:02:01 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:08.361 07:02:01 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:28:08.361 ************************************ 00:28:08.361 END TEST ftl_upgrade_shutdown 00:28:08.361 ************************************ 00:28:08.623 07:02:01 ftl -- ftl/ftl.sh@80 -- # [[ 1 -eq 1 ]] 00:28:08.623 07:02:01 ftl -- ftl/ftl.sh@81 -- # run_test ftl_restore_fast /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -f -c 0000:00:10.0 0000:00:11.0 00:28:08.623 07:02:01 ftl -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:28:08.623 07:02:01 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:08.623 07:02:01 ftl -- common/autotest_common.sh@10 -- # set +x 00:28:08.623 ************************************ 00:28:08.623 START TEST ftl_restore_fast 00:28:08.623 ************************************ 00:28:08.623 07:02:01 ftl.ftl_restore_fast -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -f -c 0000:00:10.0 0000:00:11.0 00:28:08.623 * Looking for test storage... 00:28:08.623 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:28:08.623 07:02:01 ftl.ftl_restore_fast -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:28:08.623 07:02:01 ftl.ftl_restore_fast -- common/autotest_common.sh@1693 -- # lcov --version 00:28:08.623 07:02:01 ftl.ftl_restore_fast -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:28:08.623 07:02:01 ftl.ftl_restore_fast -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:28:08.623 07:02:01 ftl.ftl_restore_fast -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:08.623 07:02:01 ftl.ftl_restore_fast -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:08.623 07:02:01 ftl.ftl_restore_fast -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:08.623 07:02:01 ftl.ftl_restore_fast -- scripts/common.sh@336 -- # IFS=.-: 00:28:08.623 07:02:01 ftl.ftl_restore_fast -- scripts/common.sh@336 -- # read -ra ver1 00:28:08.623 07:02:01 ftl.ftl_restore_fast -- scripts/common.sh@337 -- # IFS=.-: 00:28:08.623 07:02:01 ftl.ftl_restore_fast -- scripts/common.sh@337 -- # read -ra ver2 00:28:08.623 07:02:01 ftl.ftl_restore_fast -- scripts/common.sh@338 -- # local 'op=<' 00:28:08.623 07:02:01 ftl.ftl_restore_fast -- scripts/common.sh@340 -- # ver1_l=2 00:28:08.623 07:02:01 ftl.ftl_restore_fast -- scripts/common.sh@341 -- # ver2_l=1 00:28:08.623 07:02:01 ftl.ftl_restore_fast -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:08.623 07:02:01 ftl.ftl_restore_fast -- scripts/common.sh@344 -- # case "$op" in 00:28:08.623 07:02:01 ftl.ftl_restore_fast -- scripts/common.sh@345 -- # : 1 00:28:08.623 07:02:01 ftl.ftl_restore_fast -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:08.623 07:02:01 ftl.ftl_restore_fast -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:28:08.623 07:02:01 ftl.ftl_restore_fast -- scripts/common.sh@365 -- # decimal 1 00:28:08.623 07:02:01 ftl.ftl_restore_fast -- scripts/common.sh@353 -- # local d=1 00:28:08.623 07:02:01 ftl.ftl_restore_fast -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:08.623 07:02:01 ftl.ftl_restore_fast -- scripts/common.sh@355 -- # echo 1 00:28:08.623 07:02:01 ftl.ftl_restore_fast -- scripts/common.sh@365 -- # ver1[v]=1 00:28:08.623 07:02:01 ftl.ftl_restore_fast -- scripts/common.sh@366 -- # decimal 2 00:28:08.623 07:02:01 ftl.ftl_restore_fast -- scripts/common.sh@353 -- # local d=2 00:28:08.623 07:02:01 ftl.ftl_restore_fast -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:08.624 07:02:01 ftl.ftl_restore_fast -- scripts/common.sh@355 -- # echo 2 00:28:08.624 07:02:01 ftl.ftl_restore_fast -- scripts/common.sh@366 -- # ver2[v]=2 00:28:08.624 07:02:01 ftl.ftl_restore_fast -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:08.624 07:02:01 ftl.ftl_restore_fast -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:08.624 07:02:01 ftl.ftl_restore_fast -- scripts/common.sh@368 -- # return 0 00:28:08.624 07:02:01 ftl.ftl_restore_fast -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:08.624 07:02:01 ftl.ftl_restore_fast -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:28:08.624 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:08.624 --rc genhtml_branch_coverage=1 00:28:08.624 --rc genhtml_function_coverage=1 00:28:08.624 --rc genhtml_legend=1 00:28:08.624 --rc geninfo_all_blocks=1 00:28:08.624 --rc geninfo_unexecuted_blocks=1 00:28:08.624 00:28:08.624 ' 00:28:08.624 07:02:01 ftl.ftl_restore_fast -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:28:08.624 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:08.624 --rc genhtml_branch_coverage=1 00:28:08.624 --rc genhtml_function_coverage=1 00:28:08.624 --rc genhtml_legend=1 00:28:08.624 --rc geninfo_all_blocks=1 00:28:08.624 --rc geninfo_unexecuted_blocks=1 00:28:08.624 00:28:08.624 ' 00:28:08.624 07:02:01 ftl.ftl_restore_fast -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:28:08.624 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:08.624 --rc genhtml_branch_coverage=1 00:28:08.624 --rc genhtml_function_coverage=1 00:28:08.624 --rc genhtml_legend=1 00:28:08.624 --rc geninfo_all_blocks=1 00:28:08.624 --rc geninfo_unexecuted_blocks=1 00:28:08.624 00:28:08.624 ' 00:28:08.624 07:02:01 ftl.ftl_restore_fast -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:28:08.624 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:08.624 --rc genhtml_branch_coverage=1 00:28:08.624 --rc genhtml_function_coverage=1 00:28:08.624 --rc genhtml_legend=1 00:28:08.624 --rc geninfo_all_blocks=1 00:28:08.624 --rc geninfo_unexecuted_blocks=1 00:28:08.624 00:28:08.624 ' 00:28:08.624 07:02:01 ftl.ftl_restore_fast -- ftl/restore.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:28:08.624 07:02:01 ftl.ftl_restore_fast -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh 00:28:08.624 07:02:01 ftl.ftl_restore_fast -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:28:08.624 07:02:01 ftl.ftl_restore_fast -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:28:08.624 07:02:01 ftl.ftl_restore_fast -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
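The cmp_versions xtrace above is the harness probing the installed lcov: the helper splits both version strings on ".", "-", and ":" (IFS=.-:), walks the fields numerically, and "lt 1.15 2" succeeds because 1 < 2 already on the first field, so the lcov-1.x "--rc lcov_branch_coverage" option spellings are exported. A minimal standalone sketch of the same field-wise comparison (the function name version_lt is illustrative, not part of scripts/common.sh):

    #!/usr/bin/env bash
    # Field-wise version comparison in the spirit of cmp_versions above.
    version_lt() {
        local IFS=.-:                      # split fields on the same separators
        local -a a=($1) b=($2)
        local i
        for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
            (( ${a[i]:-0} < ${b[i]:-0} )) && return 0   # first smaller field wins
            (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
        done
        return 1                           # equal versions are not less-than
    }
    version_lt 1.15 2 && echo "lcov predates 2.x, using legacy --rc flags"
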
00:28:08.624 07:02:01 ftl.ftl_restore_fast -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:28:08.624 07:02:01 ftl.ftl_restore_fast -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:28:08.624 07:02:01 ftl.ftl_restore_fast -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:28:08.624 07:02:01 ftl.ftl_restore_fast -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:28:08.624 07:02:01 ftl.ftl_restore_fast -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:28:08.624 07:02:01 ftl.ftl_restore_fast -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:28:08.624 07:02:01 ftl.ftl_restore_fast -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:28:08.624 07:02:01 ftl.ftl_restore_fast -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:28:08.624 07:02:01 ftl.ftl_restore_fast -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:28:08.624 07:02:01 ftl.ftl_restore_fast -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:28:08.624 07:02:01 ftl.ftl_restore_fast -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:28:08.624 07:02:01 ftl.ftl_restore_fast -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:28:08.624 07:02:01 ftl.ftl_restore_fast -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:28:08.624 07:02:01 ftl.ftl_restore_fast -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:28:08.624 07:02:01 ftl.ftl_restore_fast -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:28:08.624 07:02:01 ftl.ftl_restore_fast -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:28:08.624 07:02:01 ftl.ftl_restore_fast -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:28:08.624 07:02:01 ftl.ftl_restore_fast -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:28:08.624 07:02:01 ftl.ftl_restore_fast -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:28:08.624 07:02:01 ftl.ftl_restore_fast -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:28:08.624 07:02:01 ftl.ftl_restore_fast -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:28:08.624 07:02:01 ftl.ftl_restore_fast -- ftl/common.sh@23 -- # spdk_ini_pid= 00:28:08.624 07:02:01 ftl.ftl_restore_fast -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:28:08.624 07:02:01 ftl.ftl_restore_fast -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:28:08.624 07:02:01 ftl.ftl_restore_fast -- ftl/restore.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:28:08.624 07:02:01 ftl.ftl_restore_fast -- ftl/restore.sh@13 -- # mktemp -d 00:28:08.624 07:02:01 ftl.ftl_restore_fast -- ftl/restore.sh@13 -- # mount_dir=/tmp/tmp.fbe8WGyAvF 00:28:08.624 07:02:01 ftl.ftl_restore_fast -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:28:08.624 07:02:01 ftl.ftl_restore_fast -- ftl/restore.sh@16 -- # case $opt in 00:28:08.624 07:02:01 ftl.ftl_restore_fast -- ftl/restore.sh@19 -- # fast_shutdown=1 00:28:08.624 07:02:01 ftl.ftl_restore_fast -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:28:08.624 07:02:01 ftl.ftl_restore_fast -- ftl/restore.sh@16 -- # case $opt in 00:28:08.624 07:02:01 ftl.ftl_restore_fast -- ftl/restore.sh@18 -- # nv_cache=0000:00:10.0 00:28:08.624 07:02:01 ftl.ftl_restore_fast 
-- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:28:08.624 07:02:01 ftl.ftl_restore_fast -- ftl/restore.sh@23 -- # shift 3 00:28:08.624 07:02:01 ftl.ftl_restore_fast -- ftl/restore.sh@24 -- # device=0000:00:11.0 00:28:08.624 07:02:01 ftl.ftl_restore_fast -- ftl/restore.sh@25 -- # timeout=240 00:28:08.624 07:02:01 ftl.ftl_restore_fast -- ftl/restore.sh@36 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:28:08.624 07:02:01 ftl.ftl_restore_fast -- ftl/restore.sh@39 -- # svcpid=92650 00:28:08.624 07:02:01 ftl.ftl_restore_fast -- ftl/restore.sh@41 -- # waitforlisten 92650 00:28:08.624 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:08.624 07:02:01 ftl.ftl_restore_fast -- common/autotest_common.sh@835 -- # '[' -z 92650 ']' 00:28:08.624 07:02:01 ftl.ftl_restore_fast -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:08.624 07:02:01 ftl.ftl_restore_fast -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:08.624 07:02:01 ftl.ftl_restore_fast -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:08.624 07:02:01 ftl.ftl_restore_fast -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:08.624 07:02:01 ftl.ftl_restore_fast -- common/autotest_common.sh@10 -- # set +x 00:28:08.624 07:02:01 ftl.ftl_restore_fast -- ftl/restore.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:28:08.624 [2024-11-18 07:02:01.706303] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 00:28:08.624 [2024-11-18 07:02:01.706419] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92650 ] 00:28:08.884 [2024-11-18 07:02:01.860873] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:08.884 [2024-11-18 07:02:01.884060] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:09.451 07:02:02 ftl.ftl_restore_fast -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:09.451 07:02:02 ftl.ftl_restore_fast -- common/autotest_common.sh@868 -- # return 0 00:28:09.709 07:02:02 ftl.ftl_restore_fast -- ftl/restore.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:28:09.709 07:02:02 ftl.ftl_restore_fast -- ftl/common.sh@54 -- # local name=nvme0 00:28:09.709 07:02:02 ftl.ftl_restore_fast -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:28:09.709 07:02:02 ftl.ftl_restore_fast -- ftl/common.sh@56 -- # local size=103424 00:28:09.709 07:02:02 ftl.ftl_restore_fast -- ftl/common.sh@59 -- # local base_bdev 00:28:09.709 07:02:02 ftl.ftl_restore_fast -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:28:09.968 07:02:02 ftl.ftl_restore_fast -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:28:09.968 07:02:02 ftl.ftl_restore_fast -- ftl/common.sh@62 -- # local base_size 00:28:09.968 07:02:02 ftl.ftl_restore_fast -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:28:09.968 07:02:02 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:28:09.968 07:02:02 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # local bdev_info 00:28:09.968 07:02:02 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # local bs 00:28:09.968 07:02:02 ftl.ftl_restore_fast -- 
common/autotest_common.sh@1385 -- # local nb 00:28:09.968 07:02:02 ftl.ftl_restore_fast -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:28:09.968 07:02:03 ftl.ftl_restore_fast -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:28:09.968 { 00:28:09.968 "name": "nvme0n1", 00:28:09.968 "aliases": [ 00:28:09.968 "10c4024b-26f1-4daa-a5e9-866958ff202a" 00:28:09.968 ], 00:28:09.968 "product_name": "NVMe disk", 00:28:09.968 "block_size": 4096, 00:28:09.968 "num_blocks": 1310720, 00:28:09.968 "uuid": "10c4024b-26f1-4daa-a5e9-866958ff202a", 00:28:09.968 "numa_id": -1, 00:28:09.968 "assigned_rate_limits": { 00:28:09.968 "rw_ios_per_sec": 0, 00:28:09.968 "rw_mbytes_per_sec": 0, 00:28:09.968 "r_mbytes_per_sec": 0, 00:28:09.968 "w_mbytes_per_sec": 0 00:28:09.968 }, 00:28:09.968 "claimed": true, 00:28:09.968 "claim_type": "read_many_write_one", 00:28:09.968 "zoned": false, 00:28:09.968 "supported_io_types": { 00:28:09.968 "read": true, 00:28:09.968 "write": true, 00:28:09.968 "unmap": true, 00:28:09.968 "flush": true, 00:28:09.968 "reset": true, 00:28:09.968 "nvme_admin": true, 00:28:09.968 "nvme_io": true, 00:28:09.968 "nvme_io_md": false, 00:28:09.968 "write_zeroes": true, 00:28:09.968 "zcopy": false, 00:28:09.968 "get_zone_info": false, 00:28:09.968 "zone_management": false, 00:28:09.968 "zone_append": false, 00:28:09.968 "compare": true, 00:28:09.968 "compare_and_write": false, 00:28:09.968 "abort": true, 00:28:09.968 "seek_hole": false, 00:28:09.968 "seek_data": false, 00:28:09.968 "copy": true, 00:28:09.968 "nvme_iov_md": false 00:28:09.968 }, 00:28:09.968 "driver_specific": { 00:28:09.968 "nvme": [ 00:28:09.968 { 00:28:09.968 "pci_address": "0000:00:11.0", 00:28:09.968 "trid": { 00:28:09.968 "trtype": "PCIe", 00:28:09.968 "traddr": "0000:00:11.0" 00:28:09.968 }, 00:28:09.968 "ctrlr_data": { 00:28:09.968 "cntlid": 0, 00:28:09.968 "vendor_id": "0x1b36", 00:28:09.968 "model_number": "QEMU NVMe Ctrl", 00:28:09.968 "serial_number": "12341", 00:28:09.968 "firmware_revision": "8.0.0", 00:28:09.968 "subnqn": "nqn.2019-08.org.qemu:12341", 00:28:09.968 "oacs": { 00:28:09.968 "security": 0, 00:28:09.968 "format": 1, 00:28:09.968 "firmware": 0, 00:28:09.968 "ns_manage": 1 00:28:09.968 }, 00:28:09.968 "multi_ctrlr": false, 00:28:09.968 "ana_reporting": false 00:28:09.968 }, 00:28:09.968 "vs": { 00:28:09.968 "nvme_version": "1.4" 00:28:09.968 }, 00:28:09.968 "ns_data": { 00:28:09.968 "id": 1, 00:28:09.968 "can_share": false 00:28:09.968 } 00:28:09.968 } 00:28:09.968 ], 00:28:09.968 "mp_policy": "active_passive" 00:28:09.968 } 00:28:09.968 } 00:28:09.968 ]' 00:28:09.968 07:02:03 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:28:09.968 07:02:03 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # bs=4096 00:28:09.968 07:02:03 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:28:10.227 07:02:03 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # nb=1310720 00:28:10.227 07:02:03 ftl.ftl_restore_fast -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:28:10.227 07:02:03 ftl.ftl_restore_fast -- common/autotest_common.sh@1392 -- # echo 5120 00:28:10.227 07:02:03 ftl.ftl_restore_fast -- ftl/common.sh@63 -- # base_size=5120 00:28:10.227 07:02:03 ftl.ftl_restore_fast -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:28:10.227 07:02:03 ftl.ftl_restore_fast -- ftl/common.sh@67 -- # clear_lvols 00:28:10.227 07:02:03 ftl.ftl_restore_fast -- ftl/common.sh@28 -- # jq -r 
'.[] | .uuid' 00:28:10.227 07:02:03 ftl.ftl_restore_fast -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:28:10.227 07:02:03 ftl.ftl_restore_fast -- ftl/common.sh@28 -- # stores=a9108748-7591-4fa3-a3c4-16f4d99abbfe 00:28:10.227 07:02:03 ftl.ftl_restore_fast -- ftl/common.sh@29 -- # for lvs in $stores 00:28:10.227 07:02:03 ftl.ftl_restore_fast -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u a9108748-7591-4fa3-a3c4-16f4d99abbfe 00:28:10.485 07:02:03 ftl.ftl_restore_fast -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:28:10.743 07:02:03 ftl.ftl_restore_fast -- ftl/common.sh@68 -- # lvs=b9d2a40d-7b0d-4880-9ff1-93f106986b36 00:28:10.743 07:02:03 ftl.ftl_restore_fast -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u b9d2a40d-7b0d-4880-9ff1-93f106986b36 00:28:11.002 07:02:03 ftl.ftl_restore_fast -- ftl/restore.sh@43 -- # split_bdev=fa5bdd0c-7734-4009-8f6f-5695e060c0fa 00:28:11.002 07:02:03 ftl.ftl_restore_fast -- ftl/restore.sh@44 -- # '[' -n 0000:00:10.0 ']' 00:28:11.002 07:02:03 ftl.ftl_restore_fast -- ftl/restore.sh@45 -- # create_nv_cache_bdev nvc0 0000:00:10.0 fa5bdd0c-7734-4009-8f6f-5695e060c0fa 00:28:11.002 07:02:03 ftl.ftl_restore_fast -- ftl/common.sh@35 -- # local name=nvc0 00:28:11.002 07:02:03 ftl.ftl_restore_fast -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:28:11.002 07:02:03 ftl.ftl_restore_fast -- ftl/common.sh@37 -- # local base_bdev=fa5bdd0c-7734-4009-8f6f-5695e060c0fa 00:28:11.002 07:02:03 ftl.ftl_restore_fast -- ftl/common.sh@38 -- # local cache_size= 00:28:11.002 07:02:03 ftl.ftl_restore_fast -- ftl/common.sh@41 -- # get_bdev_size fa5bdd0c-7734-4009-8f6f-5695e060c0fa 00:28:11.002 07:02:03 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # local bdev_name=fa5bdd0c-7734-4009-8f6f-5695e060c0fa 00:28:11.002 07:02:03 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # local bdev_info 00:28:11.002 07:02:03 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # local bs 00:28:11.002 07:02:03 ftl.ftl_restore_fast -- common/autotest_common.sh@1385 -- # local nb 00:28:11.002 07:02:03 ftl.ftl_restore_fast -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b fa5bdd0c-7734-4009-8f6f-5695e060c0fa 00:28:11.002 07:02:04 ftl.ftl_restore_fast -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:28:11.002 { 00:28:11.002 "name": "fa5bdd0c-7734-4009-8f6f-5695e060c0fa", 00:28:11.002 "aliases": [ 00:28:11.002 "lvs/nvme0n1p0" 00:28:11.002 ], 00:28:11.002 "product_name": "Logical Volume", 00:28:11.002 "block_size": 4096, 00:28:11.002 "num_blocks": 26476544, 00:28:11.002 "uuid": "fa5bdd0c-7734-4009-8f6f-5695e060c0fa", 00:28:11.002 "assigned_rate_limits": { 00:28:11.002 "rw_ios_per_sec": 0, 00:28:11.002 "rw_mbytes_per_sec": 0, 00:28:11.002 "r_mbytes_per_sec": 0, 00:28:11.002 "w_mbytes_per_sec": 0 00:28:11.002 }, 00:28:11.002 "claimed": false, 00:28:11.002 "zoned": false, 00:28:11.002 "supported_io_types": { 00:28:11.002 "read": true, 00:28:11.002 "write": true, 00:28:11.002 "unmap": true, 00:28:11.002 "flush": false, 00:28:11.002 "reset": true, 00:28:11.002 "nvme_admin": false, 00:28:11.002 "nvme_io": false, 00:28:11.002 "nvme_io_md": false, 00:28:11.002 "write_zeroes": true, 00:28:11.002 "zcopy": false, 00:28:11.002 "get_zone_info": false, 00:28:11.002 "zone_management": false, 00:28:11.002 "zone_append": 
false, 00:28:11.002 "compare": false, 00:28:11.002 "compare_and_write": false, 00:28:11.002 "abort": false, 00:28:11.002 "seek_hole": true, 00:28:11.002 "seek_data": true, 00:28:11.002 "copy": false, 00:28:11.002 "nvme_iov_md": false 00:28:11.002 }, 00:28:11.002 "driver_specific": { 00:28:11.002 "lvol": { 00:28:11.002 "lvol_store_uuid": "b9d2a40d-7b0d-4880-9ff1-93f106986b36", 00:28:11.002 "base_bdev": "nvme0n1", 00:28:11.002 "thin_provision": true, 00:28:11.002 "num_allocated_clusters": 0, 00:28:11.002 "snapshot": false, 00:28:11.002 "clone": false, 00:28:11.002 "esnap_clone": false 00:28:11.002 } 00:28:11.002 } 00:28:11.002 } 00:28:11.002 ]' 00:28:11.002 07:02:04 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:28:11.261 07:02:04 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # bs=4096 00:28:11.261 07:02:04 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:28:11.261 07:02:04 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # nb=26476544 00:28:11.261 07:02:04 ftl.ftl_restore_fast -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:28:11.261 07:02:04 ftl.ftl_restore_fast -- common/autotest_common.sh@1392 -- # echo 103424 00:28:11.261 07:02:04 ftl.ftl_restore_fast -- ftl/common.sh@41 -- # local base_size=5171 00:28:11.261 07:02:04 ftl.ftl_restore_fast -- ftl/common.sh@44 -- # local nvc_bdev 00:28:11.261 07:02:04 ftl.ftl_restore_fast -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:28:11.519 07:02:04 ftl.ftl_restore_fast -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:28:11.519 07:02:04 ftl.ftl_restore_fast -- ftl/common.sh@47 -- # [[ -z '' ]] 00:28:11.519 07:02:04 ftl.ftl_restore_fast -- ftl/common.sh@48 -- # get_bdev_size fa5bdd0c-7734-4009-8f6f-5695e060c0fa 00:28:11.519 07:02:04 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # local bdev_name=fa5bdd0c-7734-4009-8f6f-5695e060c0fa 00:28:11.519 07:02:04 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # local bdev_info 00:28:11.519 07:02:04 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # local bs 00:28:11.519 07:02:04 ftl.ftl_restore_fast -- common/autotest_common.sh@1385 -- # local nb 00:28:11.519 07:02:04 ftl.ftl_restore_fast -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b fa5bdd0c-7734-4009-8f6f-5695e060c0fa 00:28:11.777 07:02:04 ftl.ftl_restore_fast -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:28:11.777 { 00:28:11.777 "name": "fa5bdd0c-7734-4009-8f6f-5695e060c0fa", 00:28:11.777 "aliases": [ 00:28:11.777 "lvs/nvme0n1p0" 00:28:11.777 ], 00:28:11.777 "product_name": "Logical Volume", 00:28:11.777 "block_size": 4096, 00:28:11.777 "num_blocks": 26476544, 00:28:11.777 "uuid": "fa5bdd0c-7734-4009-8f6f-5695e060c0fa", 00:28:11.777 "assigned_rate_limits": { 00:28:11.777 "rw_ios_per_sec": 0, 00:28:11.777 "rw_mbytes_per_sec": 0, 00:28:11.777 "r_mbytes_per_sec": 0, 00:28:11.777 "w_mbytes_per_sec": 0 00:28:11.777 }, 00:28:11.777 "claimed": false, 00:28:11.777 "zoned": false, 00:28:11.777 "supported_io_types": { 00:28:11.777 "read": true, 00:28:11.777 "write": true, 00:28:11.777 "unmap": true, 00:28:11.777 "flush": false, 00:28:11.777 "reset": true, 00:28:11.777 "nvme_admin": false, 00:28:11.777 "nvme_io": false, 00:28:11.777 "nvme_io_md": false, 00:28:11.777 "write_zeroes": true, 00:28:11.777 "zcopy": false, 00:28:11.777 "get_zone_info": false, 00:28:11.777 "zone_management": false, 
00:28:11.777 "zone_append": false, 00:28:11.777 "compare": false, 00:28:11.777 "compare_and_write": false, 00:28:11.777 "abort": false, 00:28:11.777 "seek_hole": true, 00:28:11.777 "seek_data": true, 00:28:11.777 "copy": false, 00:28:11.777 "nvme_iov_md": false 00:28:11.777 }, 00:28:11.777 "driver_specific": { 00:28:11.777 "lvol": { 00:28:11.777 "lvol_store_uuid": "b9d2a40d-7b0d-4880-9ff1-93f106986b36", 00:28:11.777 "base_bdev": "nvme0n1", 00:28:11.777 "thin_provision": true, 00:28:11.777 "num_allocated_clusters": 0, 00:28:11.777 "snapshot": false, 00:28:11.777 "clone": false, 00:28:11.777 "esnap_clone": false 00:28:11.777 } 00:28:11.777 } 00:28:11.777 } 00:28:11.777 ]' 00:28:11.777 07:02:04 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:28:11.777 07:02:04 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # bs=4096 00:28:11.777 07:02:04 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:28:11.777 07:02:04 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # nb=26476544 00:28:11.777 07:02:04 ftl.ftl_restore_fast -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:28:11.777 07:02:04 ftl.ftl_restore_fast -- common/autotest_common.sh@1392 -- # echo 103424 00:28:11.777 07:02:04 ftl.ftl_restore_fast -- ftl/common.sh@48 -- # cache_size=5171 00:28:11.777 07:02:04 ftl.ftl_restore_fast -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:28:12.036 07:02:04 ftl.ftl_restore_fast -- ftl/restore.sh@45 -- # nvc_bdev=nvc0n1p0 00:28:12.036 07:02:04 ftl.ftl_restore_fast -- ftl/restore.sh@48 -- # get_bdev_size fa5bdd0c-7734-4009-8f6f-5695e060c0fa 00:28:12.036 07:02:04 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # local bdev_name=fa5bdd0c-7734-4009-8f6f-5695e060c0fa 00:28:12.036 07:02:04 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # local bdev_info 00:28:12.036 07:02:04 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # local bs 00:28:12.036 07:02:04 ftl.ftl_restore_fast -- common/autotest_common.sh@1385 -- # local nb 00:28:12.036 07:02:04 ftl.ftl_restore_fast -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b fa5bdd0c-7734-4009-8f6f-5695e060c0fa 00:28:12.036 07:02:05 ftl.ftl_restore_fast -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:28:12.036 { 00:28:12.036 "name": "fa5bdd0c-7734-4009-8f6f-5695e060c0fa", 00:28:12.036 "aliases": [ 00:28:12.036 "lvs/nvme0n1p0" 00:28:12.036 ], 00:28:12.036 "product_name": "Logical Volume", 00:28:12.036 "block_size": 4096, 00:28:12.036 "num_blocks": 26476544, 00:28:12.036 "uuid": "fa5bdd0c-7734-4009-8f6f-5695e060c0fa", 00:28:12.036 "assigned_rate_limits": { 00:28:12.036 "rw_ios_per_sec": 0, 00:28:12.036 "rw_mbytes_per_sec": 0, 00:28:12.036 "r_mbytes_per_sec": 0, 00:28:12.036 "w_mbytes_per_sec": 0 00:28:12.036 }, 00:28:12.036 "claimed": false, 00:28:12.036 "zoned": false, 00:28:12.036 "supported_io_types": { 00:28:12.036 "read": true, 00:28:12.036 "write": true, 00:28:12.036 "unmap": true, 00:28:12.036 "flush": false, 00:28:12.036 "reset": true, 00:28:12.036 "nvme_admin": false, 00:28:12.036 "nvme_io": false, 00:28:12.036 "nvme_io_md": false, 00:28:12.036 "write_zeroes": true, 00:28:12.036 "zcopy": false, 00:28:12.036 "get_zone_info": false, 00:28:12.036 "zone_management": false, 00:28:12.036 "zone_append": false, 00:28:12.036 "compare": false, 00:28:12.036 "compare_and_write": false, 00:28:12.036 "abort": false, 00:28:12.036 "seek_hole": 
true, 00:28:12.036 "seek_data": true, 00:28:12.036 "copy": false, 00:28:12.036 "nvme_iov_md": false 00:28:12.036 }, 00:28:12.036 "driver_specific": { 00:28:12.036 "lvol": { 00:28:12.036 "lvol_store_uuid": "b9d2a40d-7b0d-4880-9ff1-93f106986b36", 00:28:12.036 "base_bdev": "nvme0n1", 00:28:12.036 "thin_provision": true, 00:28:12.036 "num_allocated_clusters": 0, 00:28:12.036 "snapshot": false, 00:28:12.036 "clone": false, 00:28:12.036 "esnap_clone": false 00:28:12.036 } 00:28:12.036 } 00:28:12.036 } 00:28:12.036 ]' 00:28:12.036 07:02:05 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:28:12.036 07:02:05 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # bs=4096 00:28:12.036 07:02:05 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:28:12.296 07:02:05 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # nb=26476544 00:28:12.296 07:02:05 ftl.ftl_restore_fast -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:28:12.296 07:02:05 ftl.ftl_restore_fast -- common/autotest_common.sh@1392 -- # echo 103424 00:28:12.296 07:02:05 ftl.ftl_restore_fast -- ftl/restore.sh@48 -- # l2p_dram_size_mb=10 00:28:12.296 07:02:05 ftl.ftl_restore_fast -- ftl/restore.sh@49 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d fa5bdd0c-7734-4009-8f6f-5695e060c0fa --l2p_dram_limit 10' 00:28:12.296 07:02:05 ftl.ftl_restore_fast -- ftl/restore.sh@51 -- # '[' -n '' ']' 00:28:12.296 07:02:05 ftl.ftl_restore_fast -- ftl/restore.sh@52 -- # '[' -n 0000:00:10.0 ']' 00:28:12.296 07:02:05 ftl.ftl_restore_fast -- ftl/restore.sh@52 -- # ftl_construct_args+=' -c nvc0n1p0' 00:28:12.296 07:02:05 ftl.ftl_restore_fast -- ftl/restore.sh@54 -- # '[' 1 -eq 1 ']' 00:28:12.296 07:02:05 ftl.ftl_restore_fast -- ftl/restore.sh@55 -- # ftl_construct_args+=' --fast-shutdown' 00:28:12.296 07:02:05 ftl.ftl_restore_fast -- ftl/restore.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d fa5bdd0c-7734-4009-8f6f-5695e060c0fa --l2p_dram_limit 10 -c nvc0n1p0 --fast-shutdown 00:28:12.296 [2024-11-18 07:02:05.310904] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:12.296 [2024-11-18 07:02:05.310947] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:28:12.296 [2024-11-18 07:02:05.310959] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:28:12.296 [2024-11-18 07:02:05.310967] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:12.296 [2024-11-18 07:02:05.311015] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:12.296 [2024-11-18 07:02:05.311026] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:28:12.296 [2024-11-18 07:02:05.311035] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.025 ms 00:28:12.296 [2024-11-18 07:02:05.311045] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:12.296 [2024-11-18 07:02:05.311062] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:28:12.296 [2024-11-18 07:02:05.311242] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:28:12.296 [2024-11-18 07:02:05.311261] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:12.296 [2024-11-18 07:02:05.311271] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:28:12.296 [2024-11-18 07:02:05.311278] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.206 ms 00:28:12.296 [2024-11-18 07:02:05.311287] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:12.296 [2024-11-18 07:02:05.311310] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID ff51a709-2828-45f3-8c57-df62594139b9 00:28:12.296 [2024-11-18 07:02:05.312559] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:12.296 [2024-11-18 07:02:05.312582] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:28:12.296 [2024-11-18 07:02:05.312594] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:28:12.296 [2024-11-18 07:02:05.312601] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:12.296 [2024-11-18 07:02:05.319486] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:12.296 [2024-11-18 07:02:05.319511] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:28:12.296 [2024-11-18 07:02:05.319521] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.845 ms 00:28:12.296 [2024-11-18 07:02:05.319527] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:12.297 [2024-11-18 07:02:05.319622] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:12.297 [2024-11-18 07:02:05.319632] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:28:12.297 [2024-11-18 07:02:05.319640] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.044 ms 00:28:12.297 [2024-11-18 07:02:05.319646] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:12.297 [2024-11-18 07:02:05.319688] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:12.297 [2024-11-18 07:02:05.319697] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:28:12.297 [2024-11-18 07:02:05.319706] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:28:12.297 [2024-11-18 07:02:05.319713] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:12.297 [2024-11-18 07:02:05.319730] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:28:12.297 [2024-11-18 07:02:05.321368] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:12.297 [2024-11-18 07:02:05.321389] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:28:12.297 [2024-11-18 07:02:05.321397] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.643 ms 00:28:12.297 [2024-11-18 07:02:05.321404] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:12.297 [2024-11-18 07:02:05.321431] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:12.297 [2024-11-18 07:02:05.321439] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:28:12.297 [2024-11-18 07:02:05.321445] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:28:12.297 [2024-11-18 07:02:05.321455] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:12.297 [2024-11-18 07:02:05.321468] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:28:12.297 [2024-11-18 07:02:05.321579] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:28:12.297 [2024-11-18 07:02:05.321590] upgrade/ftl_sb_v5.c: 
101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:28:12.297 [2024-11-18 07:02:05.321604] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:28:12.297 [2024-11-18 07:02:05.321613] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:28:12.297 [2024-11-18 07:02:05.321625] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:28:12.297 [2024-11-18 07:02:05.321631] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:28:12.297 [2024-11-18 07:02:05.321640] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:28:12.297 [2024-11-18 07:02:05.321646] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:28:12.297 [2024-11-18 07:02:05.321653] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:28:12.297 [2024-11-18 07:02:05.321659] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:12.297 [2024-11-18 07:02:05.321667] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:28:12.297 [2024-11-18 07:02:05.321673] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.193 ms 00:28:12.297 [2024-11-18 07:02:05.321680] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:12.297 [2024-11-18 07:02:05.321744] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:12.297 [2024-11-18 07:02:05.321755] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:28:12.297 [2024-11-18 07:02:05.321761] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:28:12.297 [2024-11-18 07:02:05.321768] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:12.297 [2024-11-18 07:02:05.321844] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:28:12.297 [2024-11-18 07:02:05.321854] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:28:12.297 [2024-11-18 07:02:05.321860] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:28:12.297 [2024-11-18 07:02:05.321868] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:12.297 [2024-11-18 07:02:05.321875] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:28:12.297 [2024-11-18 07:02:05.321882] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:28:12.297 [2024-11-18 07:02:05.321887] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:28:12.297 [2024-11-18 07:02:05.321894] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:28:12.297 [2024-11-18 07:02:05.321900] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:28:12.297 [2024-11-18 07:02:05.321908] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:28:12.297 [2024-11-18 07:02:05.321913] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:28:12.297 [2024-11-18 07:02:05.321921] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:28:12.297 [2024-11-18 07:02:05.321927] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:28:12.297 [2024-11-18 07:02:05.321937] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:28:12.297 [2024-11-18 07:02:05.321942] ftl_layout.c: 131:dump_region: *NOTICE*: 
[FTL][ftl0] offset: 113.88 MiB 00:28:12.297 [2024-11-18 07:02:05.321949] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:12.297 [2024-11-18 07:02:05.321954] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:28:12.297 [2024-11-18 07:02:05.321961] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:28:12.297 [2024-11-18 07:02:05.321966] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:12.297 [2024-11-18 07:02:05.321983] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:28:12.297 [2024-11-18 07:02:05.321989] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:28:12.297 [2024-11-18 07:02:05.321995] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:12.297 [2024-11-18 07:02:05.322000] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:28:12.297 [2024-11-18 07:02:05.322007] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:28:12.297 [2024-11-18 07:02:05.322013] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:12.297 [2024-11-18 07:02:05.322021] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:28:12.297 [2024-11-18 07:02:05.322028] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:28:12.297 [2024-11-18 07:02:05.322035] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:12.297 [2024-11-18 07:02:05.322041] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:28:12.297 [2024-11-18 07:02:05.322050] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:28:12.297 [2024-11-18 07:02:05.322056] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:12.297 [2024-11-18 07:02:05.322064] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:28:12.297 [2024-11-18 07:02:05.322070] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:28:12.297 [2024-11-18 07:02:05.322077] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:28:12.297 [2024-11-18 07:02:05.322083] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:28:12.297 [2024-11-18 07:02:05.322092] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:28:12.297 [2024-11-18 07:02:05.322098] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:28:12.297 [2024-11-18 07:02:05.322106] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:28:12.297 [2024-11-18 07:02:05.322113] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:28:12.297 [2024-11-18 07:02:05.322121] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:12.297 [2024-11-18 07:02:05.322126] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:28:12.297 [2024-11-18 07:02:05.322135] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:28:12.297 [2024-11-18 07:02:05.322141] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:12.297 [2024-11-18 07:02:05.322150] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:28:12.297 [2024-11-18 07:02:05.322160] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:28:12.297 [2024-11-18 07:02:05.322170] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:28:12.297 [2024-11-18 
07:02:05.322176] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:12.297 [2024-11-18 07:02:05.322185] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:28:12.297 [2024-11-18 07:02:05.322197] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:28:12.297 [2024-11-18 07:02:05.322205] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:28:12.297 [2024-11-18 07:02:05.322211] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:28:12.297 [2024-11-18 07:02:05.322219] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:28:12.297 [2024-11-18 07:02:05.322226] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:28:12.297 [2024-11-18 07:02:05.322236] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:28:12.297 [2024-11-18 07:02:05.322247] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:28:12.297 [2024-11-18 07:02:05.322257] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:28:12.297 [2024-11-18 07:02:05.322264] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:28:12.297 [2024-11-18 07:02:05.322271] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:28:12.297 [2024-11-18 07:02:05.322278] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:28:12.297 [2024-11-18 07:02:05.322285] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:28:12.297 [2024-11-18 07:02:05.322292] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:28:12.297 [2024-11-18 07:02:05.322302] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:28:12.297 [2024-11-18 07:02:05.322308] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:28:12.297 [2024-11-18 07:02:05.322317] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:28:12.297 [2024-11-18 07:02:05.322324] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:28:12.298 [2024-11-18 07:02:05.322332] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:28:12.298 [2024-11-18 07:02:05.322338] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:28:12.298 [2024-11-18 07:02:05.322347] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:28:12.298 [2024-11-18 07:02:05.322353] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:28:12.298 [2024-11-18 
07:02:05.322361] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:28:12.298 [2024-11-18 07:02:05.322368] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:28:12.298 [2024-11-18 07:02:05.322378] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:28:12.298 [2024-11-18 07:02:05.322385] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:28:12.298 [2024-11-18 07:02:05.322394] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:28:12.298 [2024-11-18 07:02:05.322400] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:28:12.298 [2024-11-18 07:02:05.322409] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:12.298 [2024-11-18 07:02:05.322415] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:28:12.298 [2024-11-18 07:02:05.322423] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.614 ms 00:28:12.298 [2024-11-18 07:02:05.322429] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:12.298 [2024-11-18 07:02:05.322460] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 00:28:12.298 [2024-11-18 07:02:05.322468] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:28:16.503 [2024-11-18 07:02:09.232897] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:16.503 [2024-11-18 07:02:09.233018] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:28:16.503 [2024-11-18 07:02:09.233041] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3910.408 ms 00:28:16.503 [2024-11-18 07:02:09.233052] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:16.503 [2024-11-18 07:02:09.252157] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:16.503 [2024-11-18 07:02:09.252218] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:28:16.503 [2024-11-18 07:02:09.252237] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.970 ms 00:28:16.503 [2024-11-18 07:02:09.252248] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:16.503 [2024-11-18 07:02:09.252384] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:16.503 [2024-11-18 07:02:09.252404] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:28:16.503 [2024-11-18 07:02:09.252418] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.079 ms 00:28:16.503 [2024-11-18 07:02:09.252430] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:16.503 [2024-11-18 07:02:09.269541] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:16.503 [2024-11-18 07:02:09.269597] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:28:16.503 [2024-11-18 07:02:09.269613] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.062 ms 00:28:16.503 [2024-11-18 07:02:09.269622] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:28:16.503 [2024-11-18 07:02:09.269665] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:16.503 [2024-11-18 07:02:09.269674] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:28:16.503 [2024-11-18 07:02:09.269687] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:28:16.503 [2024-11-18 07:02:09.269696] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:16.503 [2024-11-18 07:02:09.270446] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:16.503 [2024-11-18 07:02:09.270491] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:28:16.503 [2024-11-18 07:02:09.270507] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.690 ms 00:28:16.503 [2024-11-18 07:02:09.270517] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:16.503 [2024-11-18 07:02:09.270645] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:16.503 [2024-11-18 07:02:09.270658] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:28:16.503 [2024-11-18 07:02:09.270670] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.095 ms 00:28:16.503 [2024-11-18 07:02:09.270679] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:16.503 [2024-11-18 07:02:09.282192] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:16.503 [2024-11-18 07:02:09.282244] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:28:16.503 [2024-11-18 07:02:09.282260] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.488 ms 00:28:16.503 [2024-11-18 07:02:09.282270] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:16.503 [2024-11-18 07:02:09.293650] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:28:16.503 [2024-11-18 07:02:09.298644] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:16.503 [2024-11-18 07:02:09.298693] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:28:16.503 [2024-11-18 07:02:09.298705] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.264 ms 00:28:16.503 [2024-11-18 07:02:09.298716] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:16.503 [2024-11-18 07:02:09.400282] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:16.503 [2024-11-18 07:02:09.400350] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:28:16.503 [2024-11-18 07:02:09.400368] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 101.530 ms 00:28:16.503 [2024-11-18 07:02:09.400384] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:16.503 [2024-11-18 07:02:09.400607] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:16.503 [2024-11-18 07:02:09.400632] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:28:16.503 [2024-11-18 07:02:09.400642] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.171 ms 00:28:16.503 [2024-11-18 07:02:09.400654] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:16.503 [2024-11-18 07:02:09.406633] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:16.503 [2024-11-18 07:02:09.406688] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 
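The "l2p maximum resident size is: 9 (of 10) MiB" notice from ftl_l2p_cache.c ties directly back to the --l2p_dram_limit 10 argument passed to bdev_ftl_create: the layout dump earlier reserved an 80.00 MiB l2p region on the NV cache for the full L2P table (20971520 entries at 4 bytes per entry), so under a 10 MiB DRAM budget the cache keeps at most 9 MiB of the table resident and faults the remaining pages in from that region on demand. The sizing arithmetic, as a quick shell check:

    # Full L2P size implied by the layout dump: entry count * address size.
    echo "$(( 20971520 * 4 / 1024 / 1024 )) MiB"   # -> 80 MiB, against a 10 MiB DRAM limit
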
00:28:16.503 [2024-11-18 07:02:09.406700] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.957 ms 00:28:16.503 [2024-11-18 07:02:09.406715] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:16.503 [2024-11-18 07:02:09.411675] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:16.503 [2024-11-18 07:02:09.411727] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:28:16.503 [2024-11-18 07:02:09.411739] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.911 ms 00:28:16.503 [2024-11-18 07:02:09.411750] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:16.503 [2024-11-18 07:02:09.412128] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:16.503 [2024-11-18 07:02:09.412147] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:28:16.503 [2024-11-18 07:02:09.412158] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.319 ms 00:28:16.503 [2024-11-18 07:02:09.412171] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:16.503 [2024-11-18 07:02:09.459680] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:16.503 [2024-11-18 07:02:09.459738] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:28:16.503 [2024-11-18 07:02:09.459751] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 47.485 ms 00:28:16.503 [2024-11-18 07:02:09.459767] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:16.503 [2024-11-18 07:02:09.467576] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:16.503 [2024-11-18 07:02:09.467631] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:28:16.503 [2024-11-18 07:02:09.467644] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.721 ms 00:28:16.503 [2024-11-18 07:02:09.467655] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:16.503 [2024-11-18 07:02:09.473215] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:16.503 [2024-11-18 07:02:09.473267] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:28:16.503 [2024-11-18 07:02:09.473279] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.509 ms 00:28:16.503 [2024-11-18 07:02:09.473290] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:16.503 [2024-11-18 07:02:09.479359] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:16.503 [2024-11-18 07:02:09.479412] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:28:16.503 [2024-11-18 07:02:09.479423] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.023 ms 00:28:16.503 [2024-11-18 07:02:09.479437] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:16.503 [2024-11-18 07:02:09.479489] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:16.503 [2024-11-18 07:02:09.479502] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:28:16.503 [2024-11-18 07:02:09.479513] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:28:16.503 [2024-11-18 07:02:09.479532] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:16.503 [2024-11-18 07:02:09.479614] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:16.503 [2024-11-18 07:02:09.479632] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:28:16.503 [2024-11-18 07:02:09.479644] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.043 ms 00:28:16.503 [2024-11-18 07:02:09.479655] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:16.503 [2024-11-18 07:02:09.481013] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 4169.520 ms, result 0 00:28:16.503 { 00:28:16.503 "name": "ftl0", 00:28:16.503 "uuid": "ff51a709-2828-45f3-8c57-df62594139b9" 00:28:16.503 } 00:28:16.503 07:02:09 ftl.ftl_restore_fast -- ftl/restore.sh@61 -- # echo '{"subsystems": [' 00:28:16.503 07:02:09 ftl.ftl_restore_fast -- ftl/restore.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:28:16.764 07:02:09 ftl.ftl_restore_fast -- ftl/restore.sh@63 -- # echo ']}' 00:28:16.764 07:02:09 ftl.ftl_restore_fast -- ftl/restore.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:28:17.028 [2024-11-18 07:02:09.922834] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:17.028 [2024-11-18 07:02:09.922903] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:28:17.028 [2024-11-18 07:02:09.922920] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:28:17.028 [2024-11-18 07:02:09.922932] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:17.028 [2024-11-18 07:02:09.922960] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:28:17.028 [2024-11-18 07:02:09.923920] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:17.028 [2024-11-18 07:02:09.923970] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:28:17.028 [2024-11-18 07:02:09.923995] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.929 ms 00:28:17.028 [2024-11-18 07:02:09.924008] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:17.028 [2024-11-18 07:02:09.924275] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:17.028 [2024-11-18 07:02:09.924297] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:28:17.028 [2024-11-18 07:02:09.924308] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.242 ms 00:28:17.028 [2024-11-18 07:02:09.924326] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:17.028 [2024-11-18 07:02:09.927593] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:17.028 [2024-11-18 07:02:09.927623] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:28:17.028 [2024-11-18 07:02:09.927633] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.248 ms 00:28:17.028 [2024-11-18 07:02:09.927644] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:17.028 [2024-11-18 07:02:09.933993] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:17.028 [2024-11-18 07:02:09.934039] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:28:17.028 [2024-11-18 07:02:09.934056] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.330 ms 00:28:17.028 [2024-11-18 07:02:09.934068] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:17.028 [2024-11-18 07:02:09.937090] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:17.028 
[2024-11-18 07:02:09.937144] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:28:17.028 [2024-11-18 07:02:09.937155] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.936 ms 00:28:17.028 [2024-11-18 07:02:09.937165] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:17.028 [2024-11-18 07:02:09.944327] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:17.028 [2024-11-18 07:02:09.944381] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:28:17.028 [2024-11-18 07:02:09.944393] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.116 ms 00:28:17.028 [2024-11-18 07:02:09.944404] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:17.028 [2024-11-18 07:02:09.944552] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:17.028 [2024-11-18 07:02:09.944568] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:28:17.028 [2024-11-18 07:02:09.944582] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.103 ms 00:28:17.028 [2024-11-18 07:02:09.944595] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:17.028 [2024-11-18 07:02:09.947577] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:17.028 [2024-11-18 07:02:09.947633] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:28:17.028 [2024-11-18 07:02:09.947645] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.963 ms 00:28:17.028 [2024-11-18 07:02:09.947655] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:17.028 [2024-11-18 07:02:09.950558] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:17.028 [2024-11-18 07:02:09.950611] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:28:17.028 [2024-11-18 07:02:09.950620] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.858 ms 00:28:17.028 [2024-11-18 07:02:09.950630] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:17.028 [2024-11-18 07:02:09.952834] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:17.028 [2024-11-18 07:02:09.952884] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:28:17.028 [2024-11-18 07:02:09.952894] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.161 ms 00:28:17.028 [2024-11-18 07:02:09.952904] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:17.028 [2024-11-18 07:02:09.955201] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:17.028 [2024-11-18 07:02:09.955254] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:28:17.028 [2024-11-18 07:02:09.955264] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.228 ms 00:28:17.028 [2024-11-18 07:02:09.955274] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:17.028 [2024-11-18 07:02:09.955321] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:28:17.028 [2024-11-18 07:02:09.955341] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:28:17.028 [2024-11-18 07:02:09.955352] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:28:17.028 [2024-11-18 07:02:09.955363] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:28:17.028 [2024-11-18 07:02:09.955372] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:28:17.028 [2024-11-18 07:02:09.955385] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:28:17.028 [2024-11-18 07:02:09.955394] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:28:17.028 [2024-11-18 07:02:09.955408] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:28:17.028 [2024-11-18 07:02:09.955415] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:28:17.028 [2024-11-18 07:02:09.955428] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:28:17.028 [2024-11-18 07:02:09.955437] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:28:17.028 [2024-11-18 07:02:09.955448] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:28:17.028 [2024-11-18 07:02:09.955457] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:28:17.028 [2024-11-18 07:02:09.955468] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:28:17.028 [2024-11-18 07:02:09.955476] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:28:17.028 [2024-11-18 07:02:09.955487] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:28:17.028 [2024-11-18 07:02:09.955494] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:28:17.028 [2024-11-18 07:02:09.955504] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:28:17.028 [2024-11-18 07:02:09.955512] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:28:17.028 [2024-11-18 07:02:09.955522] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:28:17.028 [2024-11-18 07:02:09.955529] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:28:17.028 [2024-11-18 07:02:09.955542] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:28:17.028 [2024-11-18 07:02:09.955550] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:28:17.028 [2024-11-18 07:02:09.955560] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:28:17.028 [2024-11-18 07:02:09.955567] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:28:17.028 [2024-11-18 07:02:09.955577] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:28:17.028 [2024-11-18 07:02:09.955587] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:28:17.028 [2024-11-18 07:02:09.955597] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:28:17.028 [2024-11-18 07:02:09.955605] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:28:17.028 [2024-11-18 07:02:09.955616] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:28:17.028 [2024-11-18 07:02:09.955625] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:28:17.028 [2024-11-18 07:02:09.955637] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:28:17.028 [2024-11-18 07:02:09.955645] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:28:17.028 [2024-11-18 07:02:09.955655] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:28:17.028 [2024-11-18 07:02:09.955663] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:28:17.028 [2024-11-18 07:02:09.955673] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:28:17.028 [2024-11-18 07:02:09.955680] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:28:17.029 [2024-11-18 07:02:09.955694] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:28:17.029 [2024-11-18 07:02:09.955703] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:28:17.029 [2024-11-18 07:02:09.955713] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:28:17.029 [2024-11-18 07:02:09.955723] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:28:17.029 [2024-11-18 07:02:09.955733] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:28:17.029 [2024-11-18 07:02:09.955741] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:28:17.029 [2024-11-18 07:02:09.955750] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:28:17.029 [2024-11-18 07:02:09.955758] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:28:17.029 [2024-11-18 07:02:09.955767] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:28:17.029 [2024-11-18 07:02:09.955776] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:28:17.029 [2024-11-18 07:02:09.955786] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:28:17.029 [2024-11-18 07:02:09.955793] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:28:17.029 [2024-11-18 07:02:09.955804] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:28:17.029 [2024-11-18 07:02:09.955811] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:28:17.029 [2024-11-18 07:02:09.955820] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:28:17.029 [2024-11-18 07:02:09.955827] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:28:17.029 [2024-11-18 
07:02:09.955840] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:28:17.029 [2024-11-18 07:02:09.955848] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:28:17.029 [2024-11-18 07:02:09.955858] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:28:17.029 [2024-11-18 07:02:09.955865] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:28:17.029 [2024-11-18 07:02:09.955875] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:28:17.029 [2024-11-18 07:02:09.955882] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:28:17.029 [2024-11-18 07:02:09.955903] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:28:17.029 [2024-11-18 07:02:09.955912] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:28:17.029 [2024-11-18 07:02:09.955933] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:28:17.029 [2024-11-18 07:02:09.955941] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:28:17.029 [2024-11-18 07:02:09.955952] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:28:17.029 [2024-11-18 07:02:09.955961] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:28:17.029 [2024-11-18 07:02:09.955971] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:28:17.029 [2024-11-18 07:02:09.955996] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:28:17.029 [2024-11-18 07:02:09.956007] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:28:17.029 [2024-11-18 07:02:09.956015] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:28:17.029 [2024-11-18 07:02:09.956029] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:28:17.029 [2024-11-18 07:02:09.956039] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:28:17.029 [2024-11-18 07:02:09.956049] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:28:17.029 [2024-11-18 07:02:09.956057] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:28:17.029 [2024-11-18 07:02:09.956067] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:28:17.029 [2024-11-18 07:02:09.956075] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:28:17.029 [2024-11-18 07:02:09.956085] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:28:17.029 [2024-11-18 07:02:09.956093] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:28:17.029 [2024-11-18 07:02:09.956104] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 
00:28:17.029 [2024-11-18 07:02:09.956112] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:28:17.029 [2024-11-18 07:02:09.956122] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:28:17.029 [2024-11-18 07:02:09.956130] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:28:17.029 [2024-11-18 07:02:09.956140] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:28:17.029 [2024-11-18 07:02:09.956148] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:28:17.029 [2024-11-18 07:02:09.956159] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:28:17.029 [2024-11-18 07:02:09.956166] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:28:17.029 [2024-11-18 07:02:09.956178] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:28:17.029 [2024-11-18 07:02:09.956185] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:28:17.029 [2024-11-18 07:02:09.956196] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:28:17.029 [2024-11-18 07:02:09.956206] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:28:17.029 [2024-11-18 07:02:09.956217] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:28:17.029 [2024-11-18 07:02:09.956224] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:28:17.029 [2024-11-18 07:02:09.956234] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:28:17.029 [2024-11-18 07:02:09.956241] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:28:17.029 [2024-11-18 07:02:09.956251] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:28:17.029 [2024-11-18 07:02:09.956260] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:28:17.029 [2024-11-18 07:02:09.956272] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:28:17.029 [2024-11-18 07:02:09.956280] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:28:17.029 [2024-11-18 07:02:09.956290] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:28:17.029 [2024-11-18 07:02:09.956299] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:28:17.029 [2024-11-18 07:02:09.956309] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:28:17.029 [2024-11-18 07:02:09.956316] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:28:17.029 [2024-11-18 07:02:09.956337] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:28:17.029 [2024-11-18 07:02:09.956345] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: ff51a709-2828-45f3-8c57-df62594139b9 00:28:17.029 
[2024-11-18 07:02:09.956356] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:28:17.029 [2024-11-18 07:02:09.956365] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:28:17.029 [2024-11-18 07:02:09.956376] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:28:17.029 [2024-11-18 07:02:09.956384] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:28:17.029 [2024-11-18 07:02:09.956393] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:28:17.029 [2024-11-18 07:02:09.956405] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:28:17.029 [2024-11-18 07:02:09.956414] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:28:17.029 [2024-11-18 07:02:09.956421] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:28:17.029 [2024-11-18 07:02:09.956430] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:28:17.029 [2024-11-18 07:02:09.956438] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:17.029 [2024-11-18 07:02:09.956449] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:28:17.029 [2024-11-18 07:02:09.956458] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.118 ms 00:28:17.029 [2024-11-18 07:02:09.956468] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:17.029 [2024-11-18 07:02:09.959173] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:17.029 [2024-11-18 07:02:09.959219] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:28:17.029 [2024-11-18 07:02:09.959229] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.685 ms 00:28:17.029 [2024-11-18 07:02:09.959244] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:17.029 [2024-11-18 07:02:09.959343] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:17.029 [2024-11-18 07:02:09.959361] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:28:17.029 [2024-11-18 07:02:09.959371] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.076 ms 00:28:17.029 [2024-11-18 07:02:09.959381] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:17.029 [2024-11-18 07:02:09.970083] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:17.029 [2024-11-18 07:02:09.970133] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:28:17.029 [2024-11-18 07:02:09.970144] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:17.029 [2024-11-18 07:02:09.970158] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:17.029 [2024-11-18 07:02:09.970226] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:17.029 [2024-11-18 07:02:09.970241] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:28:17.029 [2024-11-18 07:02:09.970250] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:17.029 [2024-11-18 07:02:09.970261] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:17.030 [2024-11-18 07:02:09.970340] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:17.030 [2024-11-18 07:02:09.970359] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:28:17.030 [2024-11-18 07:02:09.970369] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:17.030 [2024-11-18 07:02:09.970380] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:17.030 [2024-11-18 07:02:09.970403] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:17.030 [2024-11-18 07:02:09.970414] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:28:17.030 [2024-11-18 07:02:09.970424] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:17.030 [2024-11-18 07:02:09.970436] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:17.030 [2024-11-18 07:02:09.990630] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:17.030 [2024-11-18 07:02:09.990698] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:28:17.030 [2024-11-18 07:02:09.990710] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:17.030 [2024-11-18 07:02:09.990732] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:17.030 [2024-11-18 07:02:10.006905] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:17.030 [2024-11-18 07:02:10.007018] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:28:17.030 [2024-11-18 07:02:10.007032] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:17.030 [2024-11-18 07:02:10.007044] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:17.030 [2024-11-18 07:02:10.007192] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:17.030 [2024-11-18 07:02:10.007212] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:28:17.030 [2024-11-18 07:02:10.007222] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:17.030 [2024-11-18 07:02:10.007234] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:17.030 [2024-11-18 07:02:10.007286] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:17.030 [2024-11-18 07:02:10.007303] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:28:17.030 [2024-11-18 07:02:10.007311] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:17.030 [2024-11-18 07:02:10.007324] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:17.030 [2024-11-18 07:02:10.007416] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:17.030 [2024-11-18 07:02:10.007441] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:28:17.030 [2024-11-18 07:02:10.007451] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:17.030 [2024-11-18 07:02:10.007462] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:17.030 [2024-11-18 07:02:10.007509] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:17.030 [2024-11-18 07:02:10.007526] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:28:17.030 [2024-11-18 07:02:10.007535] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:17.030 [2024-11-18 07:02:10.007549] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:17.030 [2024-11-18 07:02:10.007604] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:17.030 [2024-11-18 07:02:10.007622] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open 
cache bdev 00:28:17.030 [2024-11-18 07:02:10.007632] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:17.030 [2024-11-18 07:02:10.007645] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:17.030 [2024-11-18 07:02:10.007705] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:17.030 [2024-11-18 07:02:10.007723] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:28:17.030 [2024-11-18 07:02:10.007733] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:17.030 [2024-11-18 07:02:10.007745] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:17.030 [2024-11-18 07:02:10.007930] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 85.033 ms, result 0 00:28:17.030 true 00:28:17.030 07:02:10 ftl.ftl_restore_fast -- ftl/restore.sh@66 -- # killprocess 92650 00:28:17.030 07:02:10 ftl.ftl_restore_fast -- common/autotest_common.sh@954 -- # '[' -z 92650 ']' 00:28:17.030 07:02:10 ftl.ftl_restore_fast -- common/autotest_common.sh@958 -- # kill -0 92650 00:28:17.030 07:02:10 ftl.ftl_restore_fast -- common/autotest_common.sh@959 -- # uname 00:28:17.030 07:02:10 ftl.ftl_restore_fast -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:17.030 07:02:10 ftl.ftl_restore_fast -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 92650 00:28:17.030 07:02:10 ftl.ftl_restore_fast -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:28:17.030 07:02:10 ftl.ftl_restore_fast -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:28:17.030 killing process with pid 92650 00:28:17.030 07:02:10 ftl.ftl_restore_fast -- common/autotest_common.sh@972 -- # echo 'killing process with pid 92650' 00:28:17.030 07:02:10 ftl.ftl_restore_fast -- common/autotest_common.sh@973 -- # kill 92650 00:28:17.030 07:02:10 ftl.ftl_restore_fast -- common/autotest_common.sh@978 -- # wait 92650 00:28:22.327 07:02:15 ftl.ftl_restore_fast -- ftl/restore.sh@69 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile bs=4K count=256K 00:28:26.537 262144+0 records in 00:28:26.537 262144+0 records out 00:28:26.537 1073741824 bytes (1.1 GB, 1.0 GiB) copied, 3.54547 s, 303 MB/s 00:28:26.537 07:02:18 ftl.ftl_restore_fast -- ftl/restore.sh@70 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:28:27.480 07:02:20 ftl.ftl_restore_fast -- ftl/restore.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:28:27.741 [2024-11-18 07:02:20.611760] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 
00:28:27.741 [2024-11-18 07:02:20.612340] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92860 ] 00:28:27.741 [2024-11-18 07:02:20.772198] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:27.741 [2024-11-18 07:02:20.797489] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:28.002 [2024-11-18 07:02:20.912478] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:28:28.002 [2024-11-18 07:02:20.912562] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:28:28.002 [2024-11-18 07:02:21.073648] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:28.002 [2024-11-18 07:02:21.073711] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:28:28.002 [2024-11-18 07:02:21.073729] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:28:28.002 [2024-11-18 07:02:21.073739] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:28.002 [2024-11-18 07:02:21.073796] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:28.002 [2024-11-18 07:02:21.073807] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:28:28.002 [2024-11-18 07:02:21.073816] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:28:28.002 [2024-11-18 07:02:21.073824] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:28.002 [2024-11-18 07:02:21.073847] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:28:28.002 [2024-11-18 07:02:21.074109] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:28:28.002 [2024-11-18 07:02:21.074129] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:28.002 [2024-11-18 07:02:21.074137] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:28:28.002 [2024-11-18 07:02:21.074146] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.287 ms 00:28:28.003 [2024-11-18 07:02:21.074157] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:28.003 [2024-11-18 07:02:21.075917] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:28:28.003 [2024-11-18 07:02:21.079278] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:28.003 [2024-11-18 07:02:21.079332] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:28:28.003 [2024-11-18 07:02:21.079344] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.365 ms 00:28:28.003 [2024-11-18 07:02:21.079359] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:28.003 [2024-11-18 07:02:21.079431] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:28.003 [2024-11-18 07:02:21.079449] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:28:28.003 [2024-11-18 07:02:21.079458] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.024 ms 00:28:28.003 [2024-11-18 07:02:21.079465] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:28.003 [2024-11-18 07:02:21.087475] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:28:28.265 [2024-11-18 07:02:21.087517] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:28:28.265 [2024-11-18 07:02:21.087534] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.962 ms 00:28:28.265 [2024-11-18 07:02:21.087543] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:28.265 [2024-11-18 07:02:21.087640] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:28.265 [2024-11-18 07:02:21.087650] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:28:28.265 [2024-11-18 07:02:21.087659] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.072 ms 00:28:28.265 [2024-11-18 07:02:21.087669] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:28.265 [2024-11-18 07:02:21.087718] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:28.265 [2024-11-18 07:02:21.087728] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:28:28.265 [2024-11-18 07:02:21.087736] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:28:28.265 [2024-11-18 07:02:21.087744] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:28.265 [2024-11-18 07:02:21.087777] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:28:28.265 [2024-11-18 07:02:21.089724] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:28.265 [2024-11-18 07:02:21.089761] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:28:28.265 [2024-11-18 07:02:21.089770] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.952 ms 00:28:28.265 [2024-11-18 07:02:21.089779] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:28.265 [2024-11-18 07:02:21.089820] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:28.265 [2024-11-18 07:02:21.089829] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:28:28.265 [2024-11-18 07:02:21.089838] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:28:28.265 [2024-11-18 07:02:21.089845] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:28.265 [2024-11-18 07:02:21.089877] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:28:28.265 [2024-11-18 07:02:21.089897] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:28:28.265 [2024-11-18 07:02:21.089937] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:28:28.265 [2024-11-18 07:02:21.089956] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:28:28.265 [2024-11-18 07:02:21.090076] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:28:28.265 [2024-11-18 07:02:21.090088] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:28:28.265 [2024-11-18 07:02:21.090102] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:28:28.265 [2024-11-18 07:02:21.090115] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:28:28.265 [2024-11-18 07:02:21.090124] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:28:28.265 [2024-11-18 07:02:21.090133] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:28:28.265 [2024-11-18 07:02:21.090140] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:28:28.265 [2024-11-18 07:02:21.090148] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:28:28.265 [2024-11-18 07:02:21.090156] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:28:28.265 [2024-11-18 07:02:21.090170] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:28.265 [2024-11-18 07:02:21.090177] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:28:28.265 [2024-11-18 07:02:21.090186] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.298 ms 00:28:28.265 [2024-11-18 07:02:21.090195] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:28.265 [2024-11-18 07:02:21.090277] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:28.265 [2024-11-18 07:02:21.090288] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:28:28.265 [2024-11-18 07:02:21.090296] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:28:28.265 [2024-11-18 07:02:21.090303] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:28.265 [2024-11-18 07:02:21.090399] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:28:28.265 [2024-11-18 07:02:21.090410] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:28:28.265 [2024-11-18 07:02:21.090420] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:28:28.265 [2024-11-18 07:02:21.090429] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:28.265 [2024-11-18 07:02:21.090438] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:28:28.265 [2024-11-18 07:02:21.090452] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:28:28.265 [2024-11-18 07:02:21.090460] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:28:28.265 [2024-11-18 07:02:21.090469] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:28:28.265 [2024-11-18 07:02:21.090477] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:28:28.265 [2024-11-18 07:02:21.090485] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:28:28.265 [2024-11-18 07:02:21.090495] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:28:28.265 [2024-11-18 07:02:21.090502] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:28:28.265 [2024-11-18 07:02:21.090511] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:28:28.266 [2024-11-18 07:02:21.090519] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:28:28.266 [2024-11-18 07:02:21.090527] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:28:28.266 [2024-11-18 07:02:21.090534] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:28.266 [2024-11-18 07:02:21.090542] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:28:28.266 [2024-11-18 07:02:21.090551] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:28:28.266 [2024-11-18 07:02:21.090559] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:28.266 [2024-11-18 07:02:21.090568] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:28:28.266 [2024-11-18 07:02:21.090577] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:28:28.266 [2024-11-18 07:02:21.090585] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:28.266 [2024-11-18 07:02:21.090594] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:28:28.266 [2024-11-18 07:02:21.090602] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:28:28.266 [2024-11-18 07:02:21.090609] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:28.266 [2024-11-18 07:02:21.090617] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:28:28.266 [2024-11-18 07:02:21.090631] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:28:28.266 [2024-11-18 07:02:21.090640] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:28.266 [2024-11-18 07:02:21.090648] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:28:28.266 [2024-11-18 07:02:21.090656] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:28:28.266 [2024-11-18 07:02:21.090663] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:28.266 [2024-11-18 07:02:21.090671] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:28:28.266 [2024-11-18 07:02:21.090679] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:28:28.266 [2024-11-18 07:02:21.090687] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:28:28.266 [2024-11-18 07:02:21.090694] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:28:28.266 [2024-11-18 07:02:21.090703] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:28:28.266 [2024-11-18 07:02:21.090718] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:28:28.266 [2024-11-18 07:02:21.090725] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:28:28.266 [2024-11-18 07:02:21.090733] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:28:28.266 [2024-11-18 07:02:21.090740] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:28.266 [2024-11-18 07:02:21.090748] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:28:28.266 [2024-11-18 07:02:21.090755] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:28:28.266 [2024-11-18 07:02:21.090765] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:28.266 [2024-11-18 07:02:21.090772] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:28:28.266 [2024-11-18 07:02:21.090781] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:28:28.266 [2024-11-18 07:02:21.090792] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:28:28.266 [2024-11-18 07:02:21.090800] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:28.266 [2024-11-18 07:02:21.090809] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:28:28.266 [2024-11-18 07:02:21.090817] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:28:28.266 [2024-11-18 07:02:21.090826] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:28:28.266 
[2024-11-18 07:02:21.090833] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:28:28.266 [2024-11-18 07:02:21.090841] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:28:28.266 [2024-11-18 07:02:21.090849] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:28:28.266 [2024-11-18 07:02:21.090862] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:28:28.266 [2024-11-18 07:02:21.090897] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:28:28.266 [2024-11-18 07:02:21.090906] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:28:28.266 [2024-11-18 07:02:21.090914] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:28:28.266 [2024-11-18 07:02:21.090921] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:28:28.266 [2024-11-18 07:02:21.090930] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:28:28.266 [2024-11-18 07:02:21.090938] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:28:28.266 [2024-11-18 07:02:21.090946] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:28:28.266 [2024-11-18 07:02:21.090953] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:28:28.266 [2024-11-18 07:02:21.090961] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:28:28.266 [2024-11-18 07:02:21.090969] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:28:28.266 [2024-11-18 07:02:21.091001] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:28:28.266 [2024-11-18 07:02:21.091009] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:28:28.266 [2024-11-18 07:02:21.091018] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:28:28.266 [2024-11-18 07:02:21.091026] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:28:28.266 [2024-11-18 07:02:21.091035] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:28:28.266 [2024-11-18 07:02:21.091042] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:28:28.266 [2024-11-18 07:02:21.091051] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:28:28.266 [2024-11-18 07:02:21.091062] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:28:28.266 [2024-11-18 07:02:21.091070] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:28:28.266 [2024-11-18 07:02:21.091078] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:28:28.266 [2024-11-18 07:02:21.091088] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:28:28.266 [2024-11-18 07:02:21.091096] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:28.266 [2024-11-18 07:02:21.091104] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:28:28.266 [2024-11-18 07:02:21.091112] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.765 ms 00:28:28.266 [2024-11-18 07:02:21.091120] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:28.266 [2024-11-18 07:02:21.104340] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:28.266 [2024-11-18 07:02:21.104388] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:28:28.266 [2024-11-18 07:02:21.104400] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.169 ms 00:28:28.266 [2024-11-18 07:02:21.104408] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:28.266 [2024-11-18 07:02:21.104490] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:28.266 [2024-11-18 07:02:21.104504] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:28:28.266 [2024-11-18 07:02:21.104513] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.059 ms 00:28:28.266 [2024-11-18 07:02:21.104521] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:28.266 [2024-11-18 07:02:21.127641] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:28.266 [2024-11-18 07:02:21.127723] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:28:28.266 [2024-11-18 07:02:21.127745] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.062 ms 00:28:28.266 [2024-11-18 07:02:21.127759] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:28.266 [2024-11-18 07:02:21.127829] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:28.266 [2024-11-18 07:02:21.127851] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:28:28.266 [2024-11-18 07:02:21.127871] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:28:28.266 [2024-11-18 07:02:21.127889] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:28.266 [2024-11-18 07:02:21.128535] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:28.266 [2024-11-18 07:02:21.128580] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:28:28.266 [2024-11-18 07:02:21.128596] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.564 ms 00:28:28.267 [2024-11-18 07:02:21.128609] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:28.267 [2024-11-18 07:02:21.128809] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:28.267 [2024-11-18 07:02:21.128823] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:28:28.267 [2024-11-18 07:02:21.128835] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.164 ms 00:28:28.267 [2024-11-18 07:02:21.128846] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:28.267 [2024-11-18 07:02:21.137661] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:28.267 [2024-11-18 07:02:21.137731] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:28:28.267 [2024-11-18 07:02:21.137754] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.785 ms 00:28:28.267 [2024-11-18 07:02:21.137766] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:28.267 [2024-11-18 07:02:21.141788] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:28:28.267 [2024-11-18 07:02:21.141844] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:28:28.267 [2024-11-18 07:02:21.141856] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:28.267 [2024-11-18 07:02:21.141865] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:28:28.267 [2024-11-18 07:02:21.141874] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.948 ms 00:28:28.267 [2024-11-18 07:02:21.141881] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:28.267 [2024-11-18 07:02:21.157700] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:28.267 [2024-11-18 07:02:21.157759] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:28:28.267 [2024-11-18 07:02:21.157776] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.759 ms 00:28:28.267 [2024-11-18 07:02:21.157783] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:28.267 [2024-11-18 07:02:21.160339] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:28.267 [2024-11-18 07:02:21.160389] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:28:28.267 [2024-11-18 07:02:21.160400] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.504 ms 00:28:28.267 [2024-11-18 07:02:21.160407] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:28.267 [2024-11-18 07:02:21.162948] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:28.267 [2024-11-18 07:02:21.163008] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:28:28.267 [2024-11-18 07:02:21.163018] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.491 ms 00:28:28.267 [2024-11-18 07:02:21.163026] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:28.267 [2024-11-18 07:02:21.163380] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:28.267 [2024-11-18 07:02:21.163392] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:28:28.267 [2024-11-18 07:02:21.163402] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.278 ms 00:28:28.267 [2024-11-18 07:02:21.163409] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:28.267 [2024-11-18 07:02:21.187385] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:28.267 [2024-11-18 07:02:21.187459] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:28:28.267 [2024-11-18 07:02:21.187473] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 
23.957 ms 00:28:28.267 [2024-11-18 07:02:21.187482] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:28.267 [2024-11-18 07:02:21.195750] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:28:28.267 [2024-11-18 07:02:21.198929] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:28.267 [2024-11-18 07:02:21.198971] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:28:28.267 [2024-11-18 07:02:21.199007] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.387 ms 00:28:28.267 [2024-11-18 07:02:21.199018] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:28.267 [2024-11-18 07:02:21.199101] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:28.267 [2024-11-18 07:02:21.199112] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:28:28.267 [2024-11-18 07:02:21.199121] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:28:28.267 [2024-11-18 07:02:21.199130] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:28.267 [2024-11-18 07:02:21.199201] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:28.267 [2024-11-18 07:02:21.199217] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:28:28.267 [2024-11-18 07:02:21.199226] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:28:28.267 [2024-11-18 07:02:21.199236] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:28.267 [2024-11-18 07:02:21.199258] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:28.267 [2024-11-18 07:02:21.199267] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:28:28.267 [2024-11-18 07:02:21.199276] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:28:28.267 [2024-11-18 07:02:21.199283] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:28.267 [2024-11-18 07:02:21.199323] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:28:28.267 [2024-11-18 07:02:21.199334] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:28.267 [2024-11-18 07:02:21.199342] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:28:28.267 [2024-11-18 07:02:21.199354] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:28:28.267 [2024-11-18 07:02:21.199362] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:28.267 [2024-11-18 07:02:21.204923] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:28.267 [2024-11-18 07:02:21.204993] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:28:28.267 [2024-11-18 07:02:21.205005] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.538 ms 00:28:28.267 [2024-11-18 07:02:21.205014] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:28.267 [2024-11-18 07:02:21.205103] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:28.267 [2024-11-18 07:02:21.205113] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:28:28.267 [2024-11-18 07:02:21.205131] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.044 ms 00:28:28.267 [2024-11-18 07:02:21.205143] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:28.267 
[2024-11-18 07:02:21.206365] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 132.222 ms, result 0 00:28:29.215  [2024-11-18T07:03:06.660Z] Copying: 1024/1024 [MB] (average 22 MBps)[2024-11-18 07:03:06.237666] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:13.573 [2024-11-18 07:03:06.237798] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:29:13.573 [2024-11-18 07:03:06.237814] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:29:13.573 [2024-11-18 07:03:06.237821] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:13.573 [2024-11-18 07:03:06.237842] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:29:13.573 
[2024-11-18 07:03:06.238270] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:13.573 [2024-11-18 07:03:06.238286] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:29:13.573 [2024-11-18 07:03:06.238293] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.416 ms 00:29:13.573 [2024-11-18 07:03:06.238299] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:13.573 [2024-11-18 07:03:06.239994] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:13.573 [2024-11-18 07:03:06.240030] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:29:13.573 [2024-11-18 07:03:06.240038] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.680 ms 00:29:13.573 [2024-11-18 07:03:06.240044] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:13.573 [2024-11-18 07:03:06.240065] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:13.573 [2024-11-18 07:03:06.240073] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Fast persist NV cache metadata 00:29:13.573 [2024-11-18 07:03:06.240079] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:29:13.573 [2024-11-18 07:03:06.240085] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:13.573 [2024-11-18 07:03:06.240120] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:13.573 [2024-11-18 07:03:06.240127] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL SHM clean state 00:29:13.573 [2024-11-18 07:03:06.240133] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:29:13.573 [2024-11-18 07:03:06.240139] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:13.573 [2024-11-18 07:03:06.240148] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:29:13.573 [2024-11-18 07:03:06.240158] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:29:13.573 [2024-11-18 07:03:06.240167] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:29:13.573 [2024-11-18 07:03:06.240173] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:29:13.573 [2024-11-18 07:03:06.240178] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:29:13.573 [2024-11-18 07:03:06.240184] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:29:13.573 [2024-11-18 07:03:06.240190] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:29:13.573 [2024-11-18 07:03:06.240198] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:29:13.573 [2024-11-18 07:03:06.240203] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:29:13.573 [2024-11-18 07:03:06.240209] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:29:13.573 [2024-11-18 07:03:06.240215] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:29:13.573 [2024-11-18 07:03:06.240221] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:29:13.573 [2024-11-18 07:03:06.240228] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:29:13.573 [2024-11-18 07:03:06.240238] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:29:13.573 [2024-11-18 07:03:06.240244] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:29:13.573 [2024-11-18 07:03:06.240249] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:29:13.573 [2024-11-18 07:03:06.240255] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:29:13.573 [2024-11-18 07:03:06.240261] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:29:13.573 [2024-11-18 07:03:06.240266] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:29:13.573 [2024-11-18 07:03:06.240272] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:29:13.573 [2024-11-18 07:03:06.240277] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:29:13.573 [2024-11-18 07:03:06.240283] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:29:13.573 [2024-11-18 07:03:06.240289] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:29:13.573 [2024-11-18 07:03:06.240294] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:29:13.573 [2024-11-18 07:03:06.240300] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:29:13.573 [2024-11-18 07:03:06.240306] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:29:13.573 [2024-11-18 07:03:06.240312] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:29:13.573 [2024-11-18 07:03:06.240318] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:29:13.573 [2024-11-18 07:03:06.240323] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:29:13.573 [2024-11-18 07:03:06.240329] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:29:13.573 [2024-11-18 07:03:06.240335] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:29:13.573 [2024-11-18 07:03:06.240340] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:29:13.573 [2024-11-18 07:03:06.240346] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:29:13.573 [2024-11-18 07:03:06.240352] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:29:13.573 [2024-11-18 07:03:06.240357] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:29:13.573 [2024-11-18 07:03:06.240363] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:29:13.574 [2024-11-18 07:03:06.240369] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:29:13.574 [2024-11-18 
07:03:06.240374] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:29:13.574 [2024-11-18 07:03:06.240380] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:29:13.574 [2024-11-18 07:03:06.240385] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:29:13.574 [2024-11-18 07:03:06.240391] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:29:13.574 [2024-11-18 07:03:06.240396] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:29:13.574 [2024-11-18 07:03:06.240402] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:29:13.574 [2024-11-18 07:03:06.240407] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:29:13.574 [2024-11-18 07:03:06.240413] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:29:13.574 [2024-11-18 07:03:06.240419] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:29:13.574 [2024-11-18 07:03:06.240425] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:29:13.574 [2024-11-18 07:03:06.240430] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:29:13.574 [2024-11-18 07:03:06.240436] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:29:13.574 [2024-11-18 07:03:06.240441] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:29:13.574 [2024-11-18 07:03:06.240447] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:29:13.574 [2024-11-18 07:03:06.240452] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:29:13.574 [2024-11-18 07:03:06.240457] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:29:13.574 [2024-11-18 07:03:06.240463] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:29:13.574 [2024-11-18 07:03:06.240468] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:29:13.574 [2024-11-18 07:03:06.240474] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:29:13.574 [2024-11-18 07:03:06.240480] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:29:13.574 [2024-11-18 07:03:06.240486] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:29:13.574 [2024-11-18 07:03:06.240492] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:29:13.574 [2024-11-18 07:03:06.240498] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:29:13.574 [2024-11-18 07:03:06.240503] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:29:13.574 [2024-11-18 07:03:06.240509] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 
00:29:13.574 [2024-11-18 07:03:06.240515] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:29:13.574 [2024-11-18 07:03:06.240520] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:29:13.574 [2024-11-18 07:03:06.240526] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:29:13.574 [2024-11-18 07:03:06.240531] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:29:13.574 [2024-11-18 07:03:06.240536] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:29:13.574 [2024-11-18 07:03:06.240542] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:29:13.574 [2024-11-18 07:03:06.240548] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:29:13.574 [2024-11-18 07:03:06.240553] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:29:13.574 [2024-11-18 07:03:06.240565] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:29:13.574 [2024-11-18 07:03:06.240570] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:29:13.574 [2024-11-18 07:03:06.240576] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:29:13.574 [2024-11-18 07:03:06.240582] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:29:13.574 [2024-11-18 07:03:06.240587] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:29:13.574 [2024-11-18 07:03:06.240593] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:29:13.574 [2024-11-18 07:03:06.240599] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:29:13.574 [2024-11-18 07:03:06.240604] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:29:13.574 [2024-11-18 07:03:06.240610] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:29:13.574 [2024-11-18 07:03:06.240616] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:29:13.574 [2024-11-18 07:03:06.240622] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:29:13.574 [2024-11-18 07:03:06.240628] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:29:13.574 [2024-11-18 07:03:06.240633] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:29:13.574 [2024-11-18 07:03:06.240639] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:29:13.574 [2024-11-18 07:03:06.240645] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:29:13.574 [2024-11-18 07:03:06.240651] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:29:13.574 [2024-11-18 07:03:06.240656] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 
wr_cnt: 0 state: free 00:29:13.574 [2024-11-18 07:03:06.240662] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:29:13.574 [2024-11-18 07:03:06.240667] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:29:13.574 [2024-11-18 07:03:06.240673] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:29:13.574 [2024-11-18 07:03:06.240678] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:29:13.574 [2024-11-18 07:03:06.240684] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:29:13.574 [2024-11-18 07:03:06.240690] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:29:13.574 [2024-11-18 07:03:06.240696] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:29:13.574 [2024-11-18 07:03:06.240701] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:29:13.574 [2024-11-18 07:03:06.240707] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:29:13.574 [2024-11-18 07:03:06.240712] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:29:13.574 [2024-11-18 07:03:06.240718] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:29:13.574 [2024-11-18 07:03:06.240724] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:29:13.574 [2024-11-18 07:03:06.240729] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:29:13.574 [2024-11-18 07:03:06.240735] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:29:13.574 [2024-11-18 07:03:06.240747] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:29:13.574 [2024-11-18 07:03:06.240753] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: ff51a709-2828-45f3-8c57-df62594139b9 00:29:13.574 [2024-11-18 07:03:06.240759] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:29:13.574 [2024-11-18 07:03:06.240764] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 32 00:29:13.574 [2024-11-18 07:03:06.240770] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:29:13.574 [2024-11-18 07:03:06.240776] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:29:13.574 [2024-11-18 07:03:06.240781] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:29:13.574 [2024-11-18 07:03:06.240787] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:29:13.574 [2024-11-18 07:03:06.240798] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:29:13.574 [2024-11-18 07:03:06.240804] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:29:13.574 [2024-11-18 07:03:06.240808] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:29:13.574 [2024-11-18 07:03:06.240813] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:13.574 [2024-11-18 07:03:06.240819] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:29:13.574 [2024-11-18 07:03:06.240825] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.665 ms 00:29:13.574 [2024-11-18 07:03:06.240832] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:13.574 [2024-11-18 07:03:06.242045] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:13.574 [2024-11-18 07:03:06.242067] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:29:13.574 [2024-11-18 07:03:06.242075] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.199 ms 00:29:13.574 [2024-11-18 07:03:06.242080] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:13.574 [2024-11-18 07:03:06.242148] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:13.574 [2024-11-18 07:03:06.242155] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:29:13.574 [2024-11-18 07:03:06.242164] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:29:13.574 [2024-11-18 07:03:06.242169] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:13.574 [2024-11-18 07:03:06.246212] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:13.574 [2024-11-18 07:03:06.246235] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:29:13.574 [2024-11-18 07:03:06.246242] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:13.574 [2024-11-18 07:03:06.246248] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:13.574 [2024-11-18 07:03:06.246287] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:13.574 [2024-11-18 07:03:06.246293] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:29:13.574 [2024-11-18 07:03:06.246303] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:13.575 [2024-11-18 07:03:06.246309] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:13.575 [2024-11-18 07:03:06.246330] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:13.575 [2024-11-18 07:03:06.246336] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:29:13.575 [2024-11-18 07:03:06.246342] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:13.575 [2024-11-18 07:03:06.246348] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:13.575 [2024-11-18 07:03:06.246358] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:13.575 [2024-11-18 07:03:06.246364] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:29:13.575 [2024-11-18 07:03:06.246370] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:13.575 [2024-11-18 07:03:06.246378] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:13.575 [2024-11-18 07:03:06.253868] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:13.575 [2024-11-18 07:03:06.253907] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:29:13.575 [2024-11-18 07:03:06.253914] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:13.575 [2024-11-18 07:03:06.253920] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:13.575 [2024-11-18 07:03:06.259883] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:13.575 [2024-11-18 07:03:06.259914] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Initialize metadata 00:29:13.575 [2024-11-18 07:03:06.259927] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:13.575 [2024-11-18 07:03:06.259936] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:13.575 [2024-11-18 07:03:06.259965] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:13.575 [2024-11-18 07:03:06.259972] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:29:13.575 [2024-11-18 07:03:06.260037] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:13.575 [2024-11-18 07:03:06.260043] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:13.575 [2024-11-18 07:03:06.260061] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:13.575 [2024-11-18 07:03:06.260067] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:29:13.575 [2024-11-18 07:03:06.260073] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:13.575 [2024-11-18 07:03:06.260079] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:13.575 [2024-11-18 07:03:06.260118] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:13.575 [2024-11-18 07:03:06.260125] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:29:13.575 [2024-11-18 07:03:06.260131] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:13.575 [2024-11-18 07:03:06.260137] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:13.575 [2024-11-18 07:03:06.260157] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:13.575 [2024-11-18 07:03:06.260163] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:29:13.575 [2024-11-18 07:03:06.260169] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:13.575 [2024-11-18 07:03:06.260174] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:13.575 [2024-11-18 07:03:06.260202] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:13.575 [2024-11-18 07:03:06.260208] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:29:13.575 [2024-11-18 07:03:06.260214] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:13.575 [2024-11-18 07:03:06.260220] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:13.575 [2024-11-18 07:03:06.260249] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:13.575 [2024-11-18 07:03:06.260264] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:29:13.575 [2024-11-18 07:03:06.260270] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:13.575 [2024-11-18 07:03:06.260276] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:13.575 [2024-11-18 07:03:06.260366] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL fast shutdown', duration = 22.676 ms, result 0 00:29:13.575 00:29:13.575 00:29:13.835 07:03:06 ftl.ftl_restore_fast -- ftl/restore.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --count=262144 00:29:13.835 [2024-11-18 07:03:06.717368] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 
initialization... 00:29:13.835 [2024-11-18 07:03:06.717491] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93328 ] 00:29:13.835 [2024-11-18 07:03:06.871326] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:13.835 [2024-11-18 07:03:06.888805] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:14.097 [2024-11-18 07:03:06.969540] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:29:14.097 [2024-11-18 07:03:06.969596] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:29:14.097 [2024-11-18 07:03:07.115649] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:14.097 [2024-11-18 07:03:07.115686] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:29:14.097 [2024-11-18 07:03:07.115696] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:29:14.097 [2024-11-18 07:03:07.115703] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:14.097 [2024-11-18 07:03:07.115738] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:14.097 [2024-11-18 07:03:07.115746] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:29:14.097 [2024-11-18 07:03:07.115752] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.023 ms 00:29:14.097 [2024-11-18 07:03:07.115757] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:14.097 [2024-11-18 07:03:07.115773] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:29:14.097 [2024-11-18 07:03:07.115956] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:29:14.097 [2024-11-18 07:03:07.115991] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:14.097 [2024-11-18 07:03:07.115999] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:29:14.097 [2024-11-18 07:03:07.116009] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.226 ms 00:29:14.097 [2024-11-18 07:03:07.116016] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:14.097 [2024-11-18 07:03:07.116297] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 1, shm_clean 1 00:29:14.097 [2024-11-18 07:03:07.116322] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:14.097 [2024-11-18 07:03:07.116329] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:29:14.097 [2024-11-18 07:03:07.116335] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:29:14.097 [2024-11-18 07:03:07.116340] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:14.097 [2024-11-18 07:03:07.116376] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:14.097 [2024-11-18 07:03:07.116385] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:29:14.098 [2024-11-18 07:03:07.116392] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.022 ms 00:29:14.098 [2024-11-18 07:03:07.116397] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:14.098 [2024-11-18 07:03:07.116573] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:29:14.098 [2024-11-18 07:03:07.116587] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:29:14.098 [2024-11-18 07:03:07.116594] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.149 ms 00:29:14.098 [2024-11-18 07:03:07.116599] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:14.098 [2024-11-18 07:03:07.116658] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:14.098 [2024-11-18 07:03:07.116665] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:29:14.098 [2024-11-18 07:03:07.116672] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.047 ms 00:29:14.098 [2024-11-18 07:03:07.116680] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:14.098 [2024-11-18 07:03:07.116695] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:14.098 [2024-11-18 07:03:07.116701] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:29:14.098 [2024-11-18 07:03:07.116710] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:29:14.098 [2024-11-18 07:03:07.116719] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:14.098 [2024-11-18 07:03:07.116734] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:29:14.098 [2024-11-18 07:03:07.117941] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:14.098 [2024-11-18 07:03:07.117966] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:29:14.098 [2024-11-18 07:03:07.117983] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.210 ms 00:29:14.098 [2024-11-18 07:03:07.117989] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:14.098 [2024-11-18 07:03:07.118014] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:14.098 [2024-11-18 07:03:07.118021] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:29:14.098 [2024-11-18 07:03:07.118027] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:29:14.098 [2024-11-18 07:03:07.118033] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:14.098 [2024-11-18 07:03:07.118050] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:29:14.098 [2024-11-18 07:03:07.118062] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:29:14.098 [2024-11-18 07:03:07.118094] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:29:14.098 [2024-11-18 07:03:07.118109] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:29:14.098 [2024-11-18 07:03:07.118187] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:29:14.098 [2024-11-18 07:03:07.118200] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:29:14.098 [2024-11-18 07:03:07.118207] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:29:14.098 [2024-11-18 07:03:07.118216] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:29:14.098 [2024-11-18 07:03:07.118222] 
ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:29:14.098 [2024-11-18 07:03:07.118232] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:29:14.098 [2024-11-18 07:03:07.118241] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:29:14.098 [2024-11-18 07:03:07.118246] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:29:14.098 [2024-11-18 07:03:07.118252] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:29:14.098 [2024-11-18 07:03:07.118257] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:14.098 [2024-11-18 07:03:07.118263] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:29:14.098 [2024-11-18 07:03:07.118269] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.210 ms 00:29:14.098 [2024-11-18 07:03:07.118274] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:14.098 [2024-11-18 07:03:07.118336] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:14.098 [2024-11-18 07:03:07.118349] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:29:14.098 [2024-11-18 07:03:07.118357] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:29:14.098 [2024-11-18 07:03:07.118363] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:14.098 [2024-11-18 07:03:07.118436] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:29:14.098 [2024-11-18 07:03:07.118447] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:29:14.098 [2024-11-18 07:03:07.118453] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:29:14.098 [2024-11-18 07:03:07.118466] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:14.098 [2024-11-18 07:03:07.118472] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:29:14.098 [2024-11-18 07:03:07.118477] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:29:14.098 [2024-11-18 07:03:07.118482] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:29:14.098 [2024-11-18 07:03:07.118491] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:29:14.098 [2024-11-18 07:03:07.118496] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:29:14.098 [2024-11-18 07:03:07.118502] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:29:14.098 [2024-11-18 07:03:07.118507] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:29:14.098 [2024-11-18 07:03:07.118512] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:29:14.098 [2024-11-18 07:03:07.118517] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:29:14.098 [2024-11-18 07:03:07.118522] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:29:14.098 [2024-11-18 07:03:07.118528] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:29:14.098 [2024-11-18 07:03:07.118532] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:14.098 [2024-11-18 07:03:07.118538] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:29:14.098 [2024-11-18 07:03:07.118542] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:29:14.098 [2024-11-18 07:03:07.118547] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:14.098 [2024-11-18 07:03:07.118554] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:29:14.098 [2024-11-18 07:03:07.118559] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:29:14.098 [2024-11-18 07:03:07.118564] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:14.098 [2024-11-18 07:03:07.118569] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:29:14.098 [2024-11-18 07:03:07.118574] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:29:14.098 [2024-11-18 07:03:07.118580] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:14.098 [2024-11-18 07:03:07.118584] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:29:14.098 [2024-11-18 07:03:07.118589] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:29:14.098 [2024-11-18 07:03:07.118595] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:14.098 [2024-11-18 07:03:07.118601] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:29:14.098 [2024-11-18 07:03:07.118607] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:29:14.098 [2024-11-18 07:03:07.118612] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:14.098 [2024-11-18 07:03:07.118618] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:29:14.098 [2024-11-18 07:03:07.118624] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:29:14.098 [2024-11-18 07:03:07.118629] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:29:14.098 [2024-11-18 07:03:07.118635] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:29:14.098 [2024-11-18 07:03:07.118646] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:29:14.098 [2024-11-18 07:03:07.118651] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:29:14.098 [2024-11-18 07:03:07.118657] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:29:14.098 [2024-11-18 07:03:07.118663] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:29:14.098 [2024-11-18 07:03:07.118669] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:14.098 [2024-11-18 07:03:07.118675] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:29:14.098 [2024-11-18 07:03:07.118681] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:29:14.098 [2024-11-18 07:03:07.118686] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:14.098 [2024-11-18 07:03:07.118692] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:29:14.098 [2024-11-18 07:03:07.118699] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:29:14.098 [2024-11-18 07:03:07.118705] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:29:14.098 [2024-11-18 07:03:07.118711] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:14.098 [2024-11-18 07:03:07.118720] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:29:14.098 [2024-11-18 07:03:07.118726] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:29:14.098 [2024-11-18 07:03:07.118732] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:29:14.098 
[2024-11-18 07:03:07.118738] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:29:14.098 [2024-11-18 07:03:07.118746] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:29:14.098 [2024-11-18 07:03:07.118751] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:29:14.098 [2024-11-18 07:03:07.118758] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:29:14.098 [2024-11-18 07:03:07.118770] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:29:14.098 [2024-11-18 07:03:07.118777] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:29:14.098 [2024-11-18 07:03:07.118784] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:29:14.098 [2024-11-18 07:03:07.118791] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:29:14.098 [2024-11-18 07:03:07.118797] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:29:14.099 [2024-11-18 07:03:07.118803] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:29:14.099 [2024-11-18 07:03:07.118809] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:29:14.099 [2024-11-18 07:03:07.118816] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:29:14.099 [2024-11-18 07:03:07.118822] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:29:14.099 [2024-11-18 07:03:07.118829] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:29:14.099 [2024-11-18 07:03:07.118835] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:29:14.099 [2024-11-18 07:03:07.118841] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:29:14.099 [2024-11-18 07:03:07.118847] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:29:14.099 [2024-11-18 07:03:07.118855] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:29:14.099 [2024-11-18 07:03:07.118861] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:29:14.099 [2024-11-18 07:03:07.118867] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:29:14.099 [2024-11-18 07:03:07.118877] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:29:14.099 [2024-11-18 07:03:07.118884] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:29:14.099 [2024-11-18 07:03:07.118890] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:29:14.099 [2024-11-18 07:03:07.118896] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:29:14.099 [2024-11-18 07:03:07.118902] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:29:14.099 [2024-11-18 07:03:07.118909] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:14.099 [2024-11-18 07:03:07.118915] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:29:14.099 [2024-11-18 07:03:07.118921] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.524 ms 00:29:14.099 [2024-11-18 07:03:07.118928] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:14.099 [2024-11-18 07:03:07.124289] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:14.099 [2024-11-18 07:03:07.124313] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:29:14.099 [2024-11-18 07:03:07.124321] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.318 ms 00:29:14.099 [2024-11-18 07:03:07.124327] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:14.099 [2024-11-18 07:03:07.124388] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:14.099 [2024-11-18 07:03:07.124394] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:29:14.099 [2024-11-18 07:03:07.124403] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.044 ms 00:29:14.099 [2024-11-18 07:03:07.124409] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:14.099 [2024-11-18 07:03:07.139777] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:14.099 [2024-11-18 07:03:07.139823] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:29:14.099 [2024-11-18 07:03:07.139835] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.329 ms 00:29:14.099 [2024-11-18 07:03:07.139848] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:14.099 [2024-11-18 07:03:07.139882] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:14.099 [2024-11-18 07:03:07.139892] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:29:14.099 [2024-11-18 07:03:07.139901] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:29:14.099 [2024-11-18 07:03:07.139909] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:14.099 [2024-11-18 07:03:07.140025] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:14.099 [2024-11-18 07:03:07.140088] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:29:14.099 [2024-11-18 07:03:07.140105] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.060 ms 00:29:14.099 [2024-11-18 07:03:07.140116] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:14.099 [2024-11-18 07:03:07.140234] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:14.099 [2024-11-18 07:03:07.140250] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:29:14.099 [2024-11-18 07:03:07.140259] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.102 ms 00:29:14.099 [2024-11-18 07:03:07.140269] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:14.099 [2024-11-18 07:03:07.145536] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:14.099 [2024-11-18 07:03:07.145570] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:29:14.099 [2024-11-18 07:03:07.145580] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.243 ms 00:29:14.099 [2024-11-18 07:03:07.145593] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:14.099 [2024-11-18 07:03:07.145696] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:29:14.099 [2024-11-18 07:03:07.145718] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:29:14.099 [2024-11-18 07:03:07.145728] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:14.099 [2024-11-18 07:03:07.145737] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:29:14.099 [2024-11-18 07:03:07.145750] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms 00:29:14.099 [2024-11-18 07:03:07.145758] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:14.099 [2024-11-18 07:03:07.158397] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:14.099 [2024-11-18 07:03:07.158425] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:29:14.099 [2024-11-18 07:03:07.158436] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.620 ms 00:29:14.099 [2024-11-18 07:03:07.158443] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:14.099 [2024-11-18 07:03:07.158552] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:14.099 [2024-11-18 07:03:07.158561] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:29:14.099 [2024-11-18 07:03:07.158569] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.089 ms 00:29:14.099 [2024-11-18 07:03:07.158576] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:14.099 [2024-11-18 07:03:07.158620] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:14.099 [2024-11-18 07:03:07.158628] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:29:14.099 [2024-11-18 07:03:07.158639] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.001 ms 00:29:14.099 [2024-11-18 07:03:07.158646] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:14.099 [2024-11-18 07:03:07.158953] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:14.099 [2024-11-18 07:03:07.158971] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:29:14.099 [2024-11-18 07:03:07.158993] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.254 ms 00:29:14.099 [2024-11-18 07:03:07.159000] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:14.099 [2024-11-18 07:03:07.159018] mngt/ftl_mngt_p2l.c: 169:ftl_mngt_p2l_restore_ckpt: *NOTICE*: [FTL][ftl0] SHM: skipping p2l ckpt restore 00:29:14.099 [2024-11-18 07:03:07.159028] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:14.099 [2024-11-18 07:03:07.159035] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Restore P2L checkpoints 00:29:14.099 [2024-11-18 07:03:07.159046] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:29:14.099 [2024-11-18 07:03:07.159053] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:14.099 [2024-11-18 07:03:07.165255] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:29:14.099 [2024-11-18 07:03:07.165352] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:14.099 [2024-11-18 07:03:07.165359] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:29:14.099 [2024-11-18 07:03:07.165366] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.284 ms 00:29:14.099 [2024-11-18 07:03:07.165371] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:14.099 [2024-11-18 07:03:07.167072] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:14.099 [2024-11-18 07:03:07.167097] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:29:14.099 [2024-11-18 07:03:07.167104] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.685 ms 00:29:14.099 [2024-11-18 07:03:07.167111] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:14.099 [2024-11-18 07:03:07.167215] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:14.099 [2024-11-18 07:03:07.167226] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:29:14.099 [2024-11-18 07:03:07.167232] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.076 ms 00:29:14.099 [2024-11-18 07:03:07.167237] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:14.099 [2024-11-18 07:03:07.167256] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:14.099 [2024-11-18 07:03:07.167265] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:29:14.099 [2024-11-18 07:03:07.167273] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:29:14.099 [2024-11-18 07:03:07.167280] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:14.099 [2024-11-18 07:03:07.167302] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:29:14.099 [2024-11-18 07:03:07.167309] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:14.099 [2024-11-18 07:03:07.167314] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:29:14.099 [2024-11-18 07:03:07.167321] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:29:14.099 [2024-11-18 07:03:07.167327] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:14.099 [2024-11-18 07:03:07.170883] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:14.099 [2024-11-18 07:03:07.170916] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:29:14.099 [2024-11-18 07:03:07.170923] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.545 ms 00:29:14.099 [2024-11-18 07:03:07.170929] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:14.099 [2024-11-18 07:03:07.171007] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:14.099 [2024-11-18 07:03:07.171015] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:29:14.099 [2024-11-18 07:03:07.171024] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.036 ms
00:29:14.099 [2024-11-18 07:03:07.171030] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:29:14.100 [2024-11-18 07:03:07.172009] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 56.058 ms, result 0
00:29:15.486 [2024-11-18T07:04:14.310Z] Copying: 1024/1024 [MB] (average 15 MBps) [intermediate per-step progress records collapsed]
[2024-11-18 07:04:14.055782] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:30:21.223 [2024-11-18 07:04:14.055853] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel
00:30:21.223 [2024-11-18 07:04:14.055872] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms
00:30:21.223 [2024-11-18 07:04:14.055886] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:30:21.223 [2024-11-18 07:04:14.055917] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
00:30:21.223 [2024-11-18 07:04:14.056482] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:30:21.223 [2024-11-18 07:04:14.056518] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device
00:30:21.223 [2024-11-18 07:04:14.056532] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.537 ms
00:30:21.223 [2024-11-18 07:04:14.056545] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:30:21.223 [2024-11-18 07:04:14.056880] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:30:21.223 [2024-11-18 07:04:14.056897] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller
00:30:21.223 [2024-11-18 07:04:14.056910] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.305 ms
00:30:21.223 [2024-11-18 07:04:14.056923] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:30:21.223 [2024-11-18 07:04:14.056964] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:30:21.223 [2024-11-18 07:04:14.057001] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Fast persist NV cache metadata
00:30:21.223 [2024-11-18 07:04:14.057015] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms
00:30:21.223 [2024-11-18 07:04:14.057028] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:30:21.223 [2024-11-18 07:04:14.057104] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:30:21.223 [2024-11-18 07:04:14.057117] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL SHM clean state
00:30:21.223 [2024-11-18 07:04:14.057131] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms
00:30:21.223 [2024-11-18 07:04:14.057142] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:30:21.223 [2024-11-18 07:04:14.057162] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity:
00:30:21.223 [2024-11-18 07:04:14.057179] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free
[Bands 2-100: identical entries, 0 / 261120 wr_cnt: 0 state: free]
00:30:21.224 [2024-11-18 07:04:14.058438] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]
00:30:21.224
[2024-11-18 07:04:14.058453] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: ff51a709-2828-45f3-8c57-df62594139b9 00:30:21.224 [2024-11-18 07:04:14.058465] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:30:21.224 [2024-11-18 07:04:14.058477] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 32 00:30:21.224 [2024-11-18 07:04:14.058488] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:30:21.224 [2024-11-18 07:04:14.058499] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:30:21.224 [2024-11-18 07:04:14.058514] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:30:21.224 [2024-11-18 07:04:14.058527] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:30:21.224 [2024-11-18 07:04:14.058538] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:30:21.224 [2024-11-18 07:04:14.058548] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:30:21.224 [2024-11-18 07:04:14.058558] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:30:21.224 [2024-11-18 07:04:14.058569] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:21.224 [2024-11-18 07:04:14.058581] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:30:21.224 [2024-11-18 07:04:14.058594] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.408 ms 00:30:21.224 [2024-11-18 07:04:14.058612] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:21.224 [2024-11-18 07:04:14.060722] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:21.224 [2024-11-18 07:04:14.060761] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:30:21.224 [2024-11-18 07:04:14.060777] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.085 ms 00:30:21.224 [2024-11-18 07:04:14.060790] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:21.224 [2024-11-18 07:04:14.060891] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:21.224 [2024-11-18 07:04:14.060914] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:30:21.224 [2024-11-18 07:04:14.060926] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.073 ms 00:30:21.224 [2024-11-18 07:04:14.060943] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:21.224 [2024-11-18 07:04:14.067419] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:21.224 [2024-11-18 07:04:14.067454] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:30:21.224 [2024-11-18 07:04:14.067468] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:21.224 [2024-11-18 07:04:14.067479] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:21.224 [2024-11-18 07:04:14.067534] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:21.224 [2024-11-18 07:04:14.067546] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:30:21.224 [2024-11-18 07:04:14.067553] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:21.224 [2024-11-18 07:04:14.067568] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:21.224 [2024-11-18 07:04:14.067617] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:21.224 [2024-11-18 
07:04:14.067627] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:30:21.224 [2024-11-18 07:04:14.067635] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:21.224 [2024-11-18 07:04:14.067642] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:21.224 [2024-11-18 07:04:14.067658] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:21.224 [2024-11-18 07:04:14.067666] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:30:21.224 [2024-11-18 07:04:14.067673] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:21.224 [2024-11-18 07:04:14.067680] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:21.224 [2024-11-18 07:04:14.076805] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:21.224 [2024-11-18 07:04:14.076843] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:30:21.224 [2024-11-18 07:04:14.076860] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:21.224 [2024-11-18 07:04:14.076868] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:21.224 [2024-11-18 07:04:14.084828] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:21.225 [2024-11-18 07:04:14.084865] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:30:21.225 [2024-11-18 07:04:14.084875] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:21.225 [2024-11-18 07:04:14.084889] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:21.225 [2024-11-18 07:04:14.084912] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:21.225 [2024-11-18 07:04:14.084920] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:30:21.225 [2024-11-18 07:04:14.084928] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:21.225 [2024-11-18 07:04:14.084940] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:21.225 [2024-11-18 07:04:14.084993] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:21.225 [2024-11-18 07:04:14.085003] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:30:21.225 [2024-11-18 07:04:14.085011] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:21.225 [2024-11-18 07:04:14.085019] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:21.225 [2024-11-18 07:04:14.085073] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:21.225 [2024-11-18 07:04:14.085090] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:30:21.225 [2024-11-18 07:04:14.085098] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:21.225 [2024-11-18 07:04:14.085106] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:21.225 [2024-11-18 07:04:14.085127] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:21.225 [2024-11-18 07:04:14.085140] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:30:21.225 [2024-11-18 07:04:14.085147] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:21.225 [2024-11-18 07:04:14.085154] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:21.225 [2024-11-18 07:04:14.085190] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:21.225 [2024-11-18 07:04:14.085200] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:30:21.225 [2024-11-18 07:04:14.085212] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:21.225 [2024-11-18 07:04:14.085220] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:21.225 [2024-11-18 07:04:14.085260] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:21.225 [2024-11-18 07:04:14.085281] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:30:21.225 [2024-11-18 07:04:14.085293] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:21.225 [2024-11-18 07:04:14.085300] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:21.225 [2024-11-18 07:04:14.085413] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL fast shutdown', duration = 29.611 ms, result 0 00:30:21.225 00:30:21.225 00:30:21.225 07:04:14 ftl.ftl_restore_fast -- ftl/restore.sh@76 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:30:23.774 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:30:23.774 07:04:16 ftl.ftl_restore_fast -- ftl/restore.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --seek=131072 00:30:23.774 [2024-11-18 07:04:16.454872] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 00:30:23.775 [2024-11-18 07:04:16.455083] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid94025 ] 00:30:23.775 [2024-11-18 07:04:16.622296] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:23.775 [2024-11-18 07:04:16.652079] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:23.775 [2024-11-18 07:04:16.762106] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:30:23.775 [2024-11-18 07:04:16.762185] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:30:24.036 [2024-11-18 07:04:16.924535] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:24.036 [2024-11-18 07:04:16.924597] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:30:24.036 [2024-11-18 07:04:16.924614] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:30:24.036 [2024-11-18 07:04:16.924623] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:24.036 [2024-11-18 07:04:16.924687] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:24.036 [2024-11-18 07:04:16.924698] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:30:24.036 [2024-11-18 07:04:16.924707] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.043 ms 00:30:24.036 [2024-11-18 07:04:16.924715] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:24.036 [2024-11-18 07:04:16.924739] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:30:24.036 [2024-11-18 07:04:16.925165] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] 
Using bdev as NV Cache device 00:30:24.036 [2024-11-18 07:04:16.925215] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:24.036 [2024-11-18 07:04:16.925225] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:30:24.036 [2024-11-18 07:04:16.925235] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.480 ms 00:30:24.036 [2024-11-18 07:04:16.925246] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:24.036 [2024-11-18 07:04:16.925548] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 1, shm_clean 1 00:30:24.036 [2024-11-18 07:04:16.925585] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:24.036 [2024-11-18 07:04:16.925598] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:30:24.036 [2024-11-18 07:04:16.925607] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:30:24.036 [2024-11-18 07:04:16.925624] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:24.036 [2024-11-18 07:04:16.925685] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:24.036 [2024-11-18 07:04:16.925699] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:30:24.036 [2024-11-18 07:04:16.925714] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:30:24.036 [2024-11-18 07:04:16.925724] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:24.036 [2024-11-18 07:04:16.926050] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:24.036 [2024-11-18 07:04:16.926068] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:30:24.037 [2024-11-18 07:04:16.926078] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.239 ms 00:30:24.037 [2024-11-18 07:04:16.926092] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:24.037 [2024-11-18 07:04:16.926185] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:24.037 [2024-11-18 07:04:16.926233] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:30:24.037 [2024-11-18 07:04:16.926242] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.073 ms 00:30:24.037 [2024-11-18 07:04:16.926250] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:24.037 [2024-11-18 07:04:16.926280] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:24.037 [2024-11-18 07:04:16.926291] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:30:24.037 [2024-11-18 07:04:16.926300] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:30:24.037 [2024-11-18 07:04:16.926310] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:24.037 [2024-11-18 07:04:16.926331] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:30:24.037 [2024-11-18 07:04:16.928593] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:24.037 [2024-11-18 07:04:16.928634] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:30:24.037 [2024-11-18 07:04:16.928647] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.260 ms 00:30:24.037 [2024-11-18 07:04:16.928657] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:24.037 [2024-11-18 07:04:16.928695] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:30:24.037 [2024-11-18 07:04:16.928704] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:30:24.037 [2024-11-18 07:04:16.928714] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:30:24.037 [2024-11-18 07:04:16.928730] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:24.037 [2024-11-18 07:04:16.928783] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:30:24.037 [2024-11-18 07:04:16.928808] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:30:24.037 [2024-11-18 07:04:16.928851] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:30:24.037 [2024-11-18 07:04:16.928874] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:30:24.037 [2024-11-18 07:04:16.928997] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:30:24.037 [2024-11-18 07:04:16.929016] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:30:24.037 [2024-11-18 07:04:16.929031] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:30:24.037 [2024-11-18 07:04:16.929043] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:30:24.037 [2024-11-18 07:04:16.929055] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:30:24.037 [2024-11-18 07:04:16.929067] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:30:24.037 [2024-11-18 07:04:16.929075] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:30:24.037 [2024-11-18 07:04:16.929086] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:30:24.037 [2024-11-18 07:04:16.929094] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:30:24.037 [2024-11-18 07:04:16.929105] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:24.037 [2024-11-18 07:04:16.929114] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:30:24.037 [2024-11-18 07:04:16.929128] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.325 ms 00:30:24.037 [2024-11-18 07:04:16.929136] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:24.037 [2024-11-18 07:04:16.929227] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:24.037 [2024-11-18 07:04:16.929239] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:30:24.037 [2024-11-18 07:04:16.929251] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:30:24.037 [2024-11-18 07:04:16.929259] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:24.037 [2024-11-18 07:04:16.929357] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:30:24.037 [2024-11-18 07:04:16.929378] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:30:24.037 [2024-11-18 07:04:16.929391] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:30:24.037 [2024-11-18 07:04:16.929399] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:24.037 [2024-11-18 
07:04:16.929412] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:30:24.037 [2024-11-18 07:04:16.929421] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:30:24.037 [2024-11-18 07:04:16.929431] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:30:24.037 [2024-11-18 07:04:16.929446] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:30:24.037 [2024-11-18 07:04:16.929454] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:30:24.037 [2024-11-18 07:04:16.929461] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:30:24.037 [2024-11-18 07:04:16.929469] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:30:24.037 [2024-11-18 07:04:16.929478] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:30:24.037 [2024-11-18 07:04:16.929486] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:30:24.037 [2024-11-18 07:04:16.929495] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:30:24.037 [2024-11-18 07:04:16.929502] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:30:24.037 [2024-11-18 07:04:16.929509] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:24.037 [2024-11-18 07:04:16.929516] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:30:24.037 [2024-11-18 07:04:16.929522] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:30:24.037 [2024-11-18 07:04:16.929534] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:24.037 [2024-11-18 07:04:16.929542] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:30:24.037 [2024-11-18 07:04:16.929548] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:30:24.037 [2024-11-18 07:04:16.929555] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:30:24.037 [2024-11-18 07:04:16.929562] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:30:24.037 [2024-11-18 07:04:16.929570] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:30:24.037 [2024-11-18 07:04:16.929576] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:30:24.037 [2024-11-18 07:04:16.929582] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:30:24.037 [2024-11-18 07:04:16.929588] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:30:24.037 [2024-11-18 07:04:16.929595] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:30:24.037 [2024-11-18 07:04:16.929601] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:30:24.037 [2024-11-18 07:04:16.929608] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:30:24.037 [2024-11-18 07:04:16.929615] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:30:24.037 [2024-11-18 07:04:16.929622] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:30:24.037 [2024-11-18 07:04:16.929629] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:30:24.037 [2024-11-18 07:04:16.929639] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:30:24.037 [2024-11-18 07:04:16.929652] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:30:24.037 [2024-11-18 07:04:16.929660] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 
MiB 00:30:24.037 [2024-11-18 07:04:16.929666] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:30:24.037 [2024-11-18 07:04:16.929673] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:30:24.037 [2024-11-18 07:04:16.929680] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:30:24.037 [2024-11-18 07:04:16.929687] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:24.037 [2024-11-18 07:04:16.929694] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:30:24.037 [2024-11-18 07:04:16.929700] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:30:24.037 [2024-11-18 07:04:16.929707] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:24.037 [2024-11-18 07:04:16.929715] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:30:24.037 [2024-11-18 07:04:16.929724] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:30:24.037 [2024-11-18 07:04:16.929732] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:30:24.037 [2024-11-18 07:04:16.929739] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:24.037 [2024-11-18 07:04:16.929753] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:30:24.037 [2024-11-18 07:04:16.929763] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:30:24.037 [2024-11-18 07:04:16.929769] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:30:24.038 [2024-11-18 07:04:16.929779] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:30:24.038 [2024-11-18 07:04:16.929786] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:30:24.038 [2024-11-18 07:04:16.929794] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:30:24.038 [2024-11-18 07:04:16.929804] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:30:24.038 [2024-11-18 07:04:16.929814] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:30:24.038 [2024-11-18 07:04:16.929826] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:30:24.038 [2024-11-18 07:04:16.929834] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:30:24.038 [2024-11-18 07:04:16.929842] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:30:24.038 [2024-11-18 07:04:16.929849] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:30:24.038 [2024-11-18 07:04:16.929857] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:30:24.038 [2024-11-18 07:04:16.929866] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:30:24.038 [2024-11-18 07:04:16.929875] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:30:24.038 [2024-11-18 07:04:16.929881] upgrade/ftl_sb_v5.c: 
416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:30:24.038 [2024-11-18 07:04:16.929888] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:30:24.038 [2024-11-18 07:04:16.929896] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:30:24.038 [2024-11-18 07:04:16.929903] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:30:24.038 [2024-11-18 07:04:16.929913] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:30:24.038 [2024-11-18 07:04:16.929920] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:30:24.038 [2024-11-18 07:04:16.929927] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:30:24.038 [2024-11-18 07:04:16.929934] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:30:24.038 [2024-11-18 07:04:16.929943] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:30:24.038 [2024-11-18 07:04:16.929951] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:30:24.038 [2024-11-18 07:04:16.929958] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:30:24.038 [2024-11-18 07:04:16.929968] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:30:24.038 [2024-11-18 07:04:16.929995] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:30:24.038 [2024-11-18 07:04:16.930006] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:24.038 [2024-11-18 07:04:16.930014] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:30:24.038 [2024-11-18 07:04:16.930023] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.718 ms 00:30:24.038 [2024-11-18 07:04:16.930031] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:24.038 [2024-11-18 07:04:16.940076] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:24.038 [2024-11-18 07:04:16.940121] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:30:24.038 [2024-11-18 07:04:16.940132] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.999 ms 00:30:24.038 [2024-11-18 07:04:16.940140] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:24.038 [2024-11-18 07:04:16.940223] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:24.038 [2024-11-18 07:04:16.940232] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:30:24.038 [2024-11-18 07:04:16.940241] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.060 ms 00:30:24.038 [2024-11-18 07:04:16.940250] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:30:24.038 [2024-11-18 07:04:16.959386] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:24.038 [2024-11-18 07:04:16.959442] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:30:24.038 [2024-11-18 07:04:16.959455] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.077 ms 00:30:24.038 [2024-11-18 07:04:16.959465] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:24.038 [2024-11-18 07:04:16.959512] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:24.038 [2024-11-18 07:04:16.959522] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:30:24.038 [2024-11-18 07:04:16.959531] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:30:24.038 [2024-11-18 07:04:16.959539] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:24.038 [2024-11-18 07:04:16.959654] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:24.038 [2024-11-18 07:04:16.959667] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:30:24.038 [2024-11-18 07:04:16.959684] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.050 ms 00:30:24.038 [2024-11-18 07:04:16.959693] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:24.038 [2024-11-18 07:04:16.959819] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:24.038 [2024-11-18 07:04:16.959859] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:30:24.038 [2024-11-18 07:04:16.959870] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.107 ms 00:30:24.038 [2024-11-18 07:04:16.959880] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:24.038 [2024-11-18 07:04:16.967539] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:24.038 [2024-11-18 07:04:16.967594] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:30:24.038 [2024-11-18 07:04:16.967604] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.638 ms 00:30:24.038 [2024-11-18 07:04:16.967616] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:24.038 [2024-11-18 07:04:16.967737] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:30:24.038 [2024-11-18 07:04:16.967756] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:30:24.038 [2024-11-18 07:04:16.967767] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:24.038 [2024-11-18 07:04:16.967776] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:30:24.038 [2024-11-18 07:04:16.967788] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms 00:30:24.038 [2024-11-18 07:04:16.967796] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:24.038 [2024-11-18 07:04:16.980430] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:24.038 [2024-11-18 07:04:16.980474] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:30:24.038 [2024-11-18 07:04:16.980492] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.612 ms 00:30:24.038 [2024-11-18 07:04:16.980500] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:24.038 [2024-11-18 07:04:16.980633] mngt/ftl_mngt.c: 427:trace_step: 
*NOTICE*: [FTL][ftl0] Action 00:30:24.038 [2024-11-18 07:04:16.980645] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:30:24.038 [2024-11-18 07:04:16.980656] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.102 ms 00:30:24.038 [2024-11-18 07:04:16.980664] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:24.038 [2024-11-18 07:04:16.980719] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:24.038 [2024-11-18 07:04:16.980732] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:30:24.038 [2024-11-18 07:04:16.980744] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:30:24.038 [2024-11-18 07:04:16.980753] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:24.038 [2024-11-18 07:04:16.981120] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:24.038 [2024-11-18 07:04:16.981151] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:30:24.038 [2024-11-18 07:04:16.981163] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.321 ms 00:30:24.038 [2024-11-18 07:04:16.981175] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:24.038 [2024-11-18 07:04:16.981197] mngt/ftl_mngt_p2l.c: 169:ftl_mngt_p2l_restore_ckpt: *NOTICE*: [FTL][ftl0] SHM: skipping p2l ckpt restore 00:30:24.038 [2024-11-18 07:04:16.981211] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:24.038 [2024-11-18 07:04:16.981219] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:30:24.038 [2024-11-18 07:04:16.981230] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:30:24.038 [2024-11-18 07:04:16.981238] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:24.038 [2024-11-18 07:04:16.990526] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:30:24.038 [2024-11-18 07:04:16.990689] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:24.038 [2024-11-18 07:04:16.990701] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:30:24.038 [2024-11-18 07:04:16.990712] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.433 ms 00:30:24.038 [2024-11-18 07:04:16.990720] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:24.038 [2024-11-18 07:04:16.993207] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:24.038 [2024-11-18 07:04:16.993246] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:30:24.038 [2024-11-18 07:04:16.993257] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.456 ms 00:30:24.038 [2024-11-18 07:04:16.993268] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:24.038 [2024-11-18 07:04:16.993366] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:24.038 [2024-11-18 07:04:16.993376] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:30:24.038 [2024-11-18 07:04:16.993388] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:30:24.039 [2024-11-18 07:04:16.993398] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:24.039 [2024-11-18 07:04:16.993427] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:24.039 [2024-11-18 07:04:16.993443] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller
00:30:24.039 [2024-11-18 07:04:16.993452] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms
00:30:24.039 [2024-11-18 07:04:16.993464] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:30:24.039 [2024-11-18 07:04:16.993502] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped
00:30:24.039 [2024-11-18 07:04:16.993513] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:30:24.039 [2024-11-18 07:04:16.993521] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup
00:30:24.039 [2024-11-18 07:04:16.993530] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms
00:30:24.039 [2024-11-18 07:04:16.993539] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:30:24.039 [2024-11-18 07:04:16.999909] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:30:24.039 [2024-11-18 07:04:16.999968] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state
00:30:24.039 [2024-11-18 07:04:16.999997] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.346 ms
00:30:24.039 [2024-11-18 07:04:17.000006] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:30:24.039 [2024-11-18 07:04:17.000102] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:30:24.039 [2024-11-18 07:04:17.000113] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization
00:30:24.039 [2024-11-18 07:04:17.000123] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms
00:30:24.039 [2024-11-18 07:04:17.000135] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:30:24.039 [2024-11-18 07:04:17.001303] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 76.306 ms, result 0
00:30:24.982 [2024-11-18T07:05:11.661Z] Copying: 1024/1024 [MB] (average 18 MBps) [intermediate per-step progress records collapsed]
[2024-11-18 07:05:11.531586] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:31:18.574 [2024-11-18 07:05:11.531663] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel
00:31:18.574 [2024-11-18 07:05:11.531681] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms
00:31:18.574 [2024-11-18 07:05:11.531691] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:31:18.574 [2024-11-18 07:05:11.533864] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
00:31:18.574 [2024-11-18 07:05:11.537202] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:31:18.574 [2024-11-18 07:05:11.537258] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device
00:31:18.574 [2024-11-18 07:05:11.537272] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.283 ms
00:31:18.574 [2024-11-18 07:05:11.537281] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:31:18.574 [2024-11-18 07:05:11.546661] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:31:18.574 [2024-11-18 07:05:11.546721] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller
00:31:18.574 [2024-11-18 07:05:11.546733] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.429 ms
00:31:18.574 [2024-11-18 07:05:11.546742] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:31:18.574 [2024-11-18 07:05:11.546771] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:31:18.574 [2024-11-18 07:05:11.546781] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Fast persist NV cache metadata
00:31:18.574 [2024-11-18 07:05:11.546791] mngt/ftl_mngt.c:
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:31:18.574 [2024-11-18 07:05:11.546800] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:18.574 [2024-11-18 07:05:11.546860] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:18.574 [2024-11-18 07:05:11.546871] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL SHM clean state 00:31:18.574 [2024-11-18 07:05:11.546882] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.023 ms 00:31:18.574 [2024-11-18 07:05:11.546890] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:18.574 [2024-11-18 07:05:11.546904] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:31:18.574 [2024-11-18 07:05:11.546918] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 128768 / 261120 wr_cnt: 1 state: open 00:31:18.574 [2024-11-18 07:05:11.546928] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:31:18.574 [2024-11-18 07:05:11.546937] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:31:18.574 [2024-11-18 07:05:11.546946] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:31:18.574 [2024-11-18 07:05:11.546954] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:31:18.574 [2024-11-18 07:05:11.546963] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:31:18.574 [2024-11-18 07:05:11.546972] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:31:18.574 [2024-11-18 07:05:11.547001] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:31:18.574 [2024-11-18 07:05:11.547010] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:31:18.574 [2024-11-18 07:05:11.547018] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:31:18.574 [2024-11-18 07:05:11.547027] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:31:18.574 [2024-11-18 07:05:11.547035] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:31:18.574 [2024-11-18 07:05:11.547043] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:31:18.574 [2024-11-18 07:05:11.547051] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:31:18.574 [2024-11-18 07:05:11.547060] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:31:18.574 [2024-11-18 07:05:11.547068] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:31:18.574 [2024-11-18 07:05:11.547076] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:31:18.574 [2024-11-18 07:05:11.547083] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:31:18.574 [2024-11-18 07:05:11.547091] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:31:18.574 [2024-11-18 07:05:11.547099] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 
20: 0 / 261120 wr_cnt: 0 state: free 00:31:18.574 [2024-11-18 07:05:11.547107] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:31:18.574 [2024-11-18 07:05:11.547116] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:31:18.574 [2024-11-18 07:05:11.547124] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:31:18.574 [2024-11-18 07:05:11.547133] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:31:18.574 [2024-11-18 07:05:11.547167] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:31:18.574 [2024-11-18 07:05:11.547177] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:31:18.574 [2024-11-18 07:05:11.547185] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:31:18.574 [2024-11-18 07:05:11.547193] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:31:18.574 [2024-11-18 07:05:11.547210] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:31:18.574 [2024-11-18 07:05:11.547218] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:31:18.574 [2024-11-18 07:05:11.547226] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:31:18.574 [2024-11-18 07:05:11.547234] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:31:18.574 [2024-11-18 07:05:11.547242] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:31:18.574 [2024-11-18 07:05:11.547250] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:31:18.574 [2024-11-18 07:05:11.547258] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:31:18.574 [2024-11-18 07:05:11.547266] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:31:18.574 [2024-11-18 07:05:11.547274] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:31:18.574 [2024-11-18 07:05:11.547283] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:31:18.574 [2024-11-18 07:05:11.547293] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:31:18.574 [2024-11-18 07:05:11.547301] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:31:18.574 [2024-11-18 07:05:11.547310] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:31:18.574 [2024-11-18 07:05:11.547318] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:31:18.574 [2024-11-18 07:05:11.547326] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:31:18.574 [2024-11-18 07:05:11.547334] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:31:18.574 [2024-11-18 07:05:11.547342] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:31:18.574 [2024-11-18 07:05:11.547350] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:31:18.574 [2024-11-18 07:05:11.547363] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:31:18.575 [2024-11-18 07:05:11.547372] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:31:18.575 [2024-11-18 07:05:11.547380] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:31:18.575 [2024-11-18 07:05:11.547388] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:31:18.575 [2024-11-18 07:05:11.547396] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:31:18.575 [2024-11-18 07:05:11.547404] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:31:18.575 [2024-11-18 07:05:11.547412] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:31:18.575 [2024-11-18 07:05:11.547422] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:31:18.575 [2024-11-18 07:05:11.547432] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:31:18.575 [2024-11-18 07:05:11.547440] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:31:18.575 [2024-11-18 07:05:11.547448] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:31:18.575 [2024-11-18 07:05:11.547456] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:31:18.575 [2024-11-18 07:05:11.547464] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:31:18.575 [2024-11-18 07:05:11.547473] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:31:18.575 [2024-11-18 07:05:11.547482] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:31:18.575 [2024-11-18 07:05:11.547490] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:31:18.575 [2024-11-18 07:05:11.547499] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:31:18.575 [2024-11-18 07:05:11.547507] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:31:18.575 [2024-11-18 07:05:11.547514] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:31:18.575 [2024-11-18 07:05:11.547522] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:31:18.575 [2024-11-18 07:05:11.547529] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:31:18.575 [2024-11-18 07:05:11.547537] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:31:18.575 [2024-11-18 07:05:11.547546] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:31:18.575 [2024-11-18 07:05:11.547554] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:31:18.575 [2024-11-18 07:05:11.547562] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:31:18.575 [2024-11-18 07:05:11.547570] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:31:18.575 [2024-11-18 07:05:11.547578] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:31:18.575 [2024-11-18 07:05:11.547585] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:31:18.575 [2024-11-18 07:05:11.547593] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:31:18.575 [2024-11-18 07:05:11.547602] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:31:18.575 [2024-11-18 07:05:11.547611] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:31:18.575 [2024-11-18 07:05:11.547619] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:31:18.575 [2024-11-18 07:05:11.547627] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:31:18.575 [2024-11-18 07:05:11.547634] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:31:18.575 [2024-11-18 07:05:11.547642] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:31:18.575 [2024-11-18 07:05:11.547650] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:31:18.575 [2024-11-18 07:05:11.547658] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:31:18.575 [2024-11-18 07:05:11.547666] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:31:18.575 [2024-11-18 07:05:11.547675] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:31:18.575 [2024-11-18 07:05:11.547683] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:31:18.575 [2024-11-18 07:05:11.547691] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:31:18.575 [2024-11-18 07:05:11.547701] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:31:18.575 [2024-11-18 07:05:11.547708] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:31:18.575 [2024-11-18 07:05:11.547716] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:31:18.575 [2024-11-18 07:05:11.547724] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:31:18.575 [2024-11-18 07:05:11.547732] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:31:18.575 [2024-11-18 07:05:11.547741] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:31:18.575 [2024-11-18 07:05:11.547749] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:31:18.575 [2024-11-18 
07:05:11.547756] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:31:18.575 [2024-11-18 07:05:11.547766] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:31:18.575 [2024-11-18 07:05:11.547774] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:31:18.575 [2024-11-18 07:05:11.547781] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:31:18.575 [2024-11-18 07:05:11.547789] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:31:18.575 [2024-11-18 07:05:11.547797] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:31:18.575 [2024-11-18 07:05:11.547813] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:31:18.575 [2024-11-18 07:05:11.547834] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: ff51a709-2828-45f3-8c57-df62594139b9 00:31:18.575 [2024-11-18 07:05:11.547843] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 128768 00:31:18.575 [2024-11-18 07:05:11.547851] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 128800 00:31:18.575 [2024-11-18 07:05:11.547860] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 128768 00:31:18.575 [2024-11-18 07:05:11.547867] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0002 00:31:18.575 [2024-11-18 07:05:11.547886] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:31:18.575 [2024-11-18 07:05:11.547897] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:31:18.575 [2024-11-18 07:05:11.547905] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:31:18.575 [2024-11-18 07:05:11.547912] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:31:18.575 [2024-11-18 07:05:11.547919] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:31:18.575 [2024-11-18 07:05:11.547926] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:18.575 [2024-11-18 07:05:11.547934] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:31:18.575 [2024-11-18 07:05:11.547942] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.023 ms 00:31:18.575 [2024-11-18 07:05:11.547951] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:18.575 [2024-11-18 07:05:11.550376] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:18.575 [2024-11-18 07:05:11.550416] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:31:18.575 [2024-11-18 07:05:11.550431] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.408 ms 00:31:18.575 [2024-11-18 07:05:11.550448] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:18.575 [2024-11-18 07:05:11.550578] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:18.575 [2024-11-18 07:05:11.550588] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:31:18.575 [2024-11-18 07:05:11.550598] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.098 ms 00:31:18.575 [2024-11-18 07:05:11.550606] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:18.575 [2024-11-18 07:05:11.557942] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Rollback 00:31:18.575 [2024-11-18 07:05:11.558026] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:31:18.575 [2024-11-18 07:05:11.558038] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:18.575 [2024-11-18 07:05:11.558046] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:18.575 [2024-11-18 07:05:11.558106] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:18.575 [2024-11-18 07:05:11.558116] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:31:18.575 [2024-11-18 07:05:11.558125] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:18.575 [2024-11-18 07:05:11.558139] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:18.575 [2024-11-18 07:05:11.558173] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:18.575 [2024-11-18 07:05:11.558183] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:31:18.575 [2024-11-18 07:05:11.558195] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:18.575 [2024-11-18 07:05:11.558203] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:18.575 [2024-11-18 07:05:11.558224] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:18.575 [2024-11-18 07:05:11.558235] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:31:18.575 [2024-11-18 07:05:11.558243] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:18.575 [2024-11-18 07:05:11.558251] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:18.575 [2024-11-18 07:05:11.572009] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:18.575 [2024-11-18 07:05:11.572072] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:31:18.575 [2024-11-18 07:05:11.572088] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:18.575 [2024-11-18 07:05:11.572097] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:18.575 [2024-11-18 07:05:11.583844] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:18.575 [2024-11-18 07:05:11.583898] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:31:18.576 [2024-11-18 07:05:11.583911] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:18.576 [2024-11-18 07:05:11.583920] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:18.576 [2024-11-18 07:05:11.583990] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:18.576 [2024-11-18 07:05:11.584001] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:31:18.576 [2024-11-18 07:05:11.584020] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:18.576 [2024-11-18 07:05:11.584029] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:18.576 [2024-11-18 07:05:11.584072] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:18.576 [2024-11-18 07:05:11.584082] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:31:18.576 [2024-11-18 07:05:11.584096] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:18.576 [2024-11-18 07:05:11.584106] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:18.576 
[2024-11-18 07:05:11.584166] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:18.576 [2024-11-18 07:05:11.584186] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:31:18.576 [2024-11-18 07:05:11.584194] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:18.576 [2024-11-18 07:05:11.584202] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:18.576 [2024-11-18 07:05:11.584237] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:18.576 [2024-11-18 07:05:11.584247] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:31:18.576 [2024-11-18 07:05:11.584255] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:18.576 [2024-11-18 07:05:11.584264] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:18.576 [2024-11-18 07:05:11.584302] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:18.576 [2024-11-18 07:05:11.584317] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:31:18.576 [2024-11-18 07:05:11.584325] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:18.576 [2024-11-18 07:05:11.584333] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:18.576 [2024-11-18 07:05:11.584388] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:18.576 [2024-11-18 07:05:11.584404] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:31:18.576 [2024-11-18 07:05:11.584413] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:18.576 [2024-11-18 07:05:11.584421] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:18.576 [2024-11-18 07:05:11.584558] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL fast shutdown', duration = 54.258 ms, result 0 00:31:19.519 00:31:19.519 00:31:19.519 07:05:12 ftl.ftl_restore_fast -- ftl/restore.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --skip=131072 --count=262144 00:31:19.519 [2024-11-18 07:05:12.487721] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 
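Note on the run above: the 'FTL fast shutdown' management process finishes with result 0, and its statistics dump is internally consistent, since WAF = total writes / user writes = 128800 / 128768 ≈ 1.0002, matching the reported value. The spdk_dd command echoed above then re-reads the restored data. Judging by the progress trace that follows (1024/1024 [MB] total), --skip and --count appear to be counted in 4 KiB FTL blocks, since 262144 × 4 KiB = 1024 MiB read after skipping 131072 × 4 KiB = 512 MiB; that block-size interpretation is an inference from the progress totals, not stated in the log. A minimal sketch of the readback step, with every path and flag copied from the logged command rather than newly introduced:

    # Readback of the second half of the FTL namespace into a local file,
    # mirroring the restore.sh@80 invocation captured above (not part of the log).
    # --ib   : input bdev name (ftl0, created earlier in this run)
    # --of   : output file receiving the data
    # --json : SPDK config describing the ftl0 bdev stack
    # --skip / --count : input offset and length in input blocks
    #                    (4 KiB each here, per the arithmetic noted above)
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd \
        --ib=ftl0 \
        --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile \
        --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json \
        --skip=131072 \
        --count=262144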
00:31:19.519 [2024-11-18 07:05:12.487887] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid94586 ] 00:31:19.781 [2024-11-18 07:05:12.653186] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:19.781 [2024-11-18 07:05:12.682099] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:19.781 [2024-11-18 07:05:12.798394] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:31:19.781 [2024-11-18 07:05:12.798485] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:31:20.043 [2024-11-18 07:05:12.960365] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:20.043 [2024-11-18 07:05:12.960430] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:31:20.043 [2024-11-18 07:05:12.960447] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:31:20.043 [2024-11-18 07:05:12.960456] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:20.043 [2024-11-18 07:05:12.960517] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:20.043 [2024-11-18 07:05:12.960528] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:31:20.043 [2024-11-18 07:05:12.960537] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.042 ms 00:31:20.043 [2024-11-18 07:05:12.960546] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:20.043 [2024-11-18 07:05:12.960574] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:31:20.043 [2024-11-18 07:05:12.960852] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:31:20.043 [2024-11-18 07:05:12.960880] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:20.043 [2024-11-18 07:05:12.960889] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:31:20.043 [2024-11-18 07:05:12.960898] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.317 ms 00:31:20.043 [2024-11-18 07:05:12.960909] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:20.043 [2024-11-18 07:05:12.961344] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 1, shm_clean 1 00:31:20.043 [2024-11-18 07:05:12.961391] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:20.044 [2024-11-18 07:05:12.961401] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:31:20.044 [2024-11-18 07:05:12.961411] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.050 ms 00:31:20.044 [2024-11-18 07:05:12.961420] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:20.044 [2024-11-18 07:05:12.961478] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:20.044 [2024-11-18 07:05:12.961491] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:31:20.044 [2024-11-18 07:05:12.961499] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:31:20.044 [2024-11-18 07:05:12.961507] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:20.044 [2024-11-18 07:05:12.961840] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:31:20.044 [2024-11-18 07:05:12.961865] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:31:20.044 [2024-11-18 07:05:12.961875] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.224 ms 00:31:20.044 [2024-11-18 07:05:12.961886] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:20.044 [2024-11-18 07:05:12.961973] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:20.044 [2024-11-18 07:05:12.962000] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:31:20.044 [2024-11-18 07:05:12.962013] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.070 ms 00:31:20.044 [2024-11-18 07:05:12.962022] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:20.044 [2024-11-18 07:05:12.962051] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:20.044 [2024-11-18 07:05:12.962066] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:31:20.044 [2024-11-18 07:05:12.962076] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:31:20.044 [2024-11-18 07:05:12.962083] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:20.044 [2024-11-18 07:05:12.962112] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:31:20.044 [2024-11-18 07:05:12.964298] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:20.044 [2024-11-18 07:05:12.964333] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:31:20.044 [2024-11-18 07:05:12.964344] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.191 ms 00:31:20.044 [2024-11-18 07:05:12.964353] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:20.044 [2024-11-18 07:05:12.964392] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:20.044 [2024-11-18 07:05:12.964409] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:31:20.044 [2024-11-18 07:05:12.964419] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:31:20.044 [2024-11-18 07:05:12.964427] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:20.044 [2024-11-18 07:05:12.964476] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:31:20.044 [2024-11-18 07:05:12.964499] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:31:20.044 [2024-11-18 07:05:12.964539] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:31:20.044 [2024-11-18 07:05:12.964556] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:31:20.044 [2024-11-18 07:05:12.964662] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:31:20.044 [2024-11-18 07:05:12.964675] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:31:20.044 [2024-11-18 07:05:12.964687] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:31:20.044 [2024-11-18 07:05:12.964699] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:31:20.044 [2024-11-18 07:05:12.964710] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:31:20.044 [2024-11-18 07:05:12.964723] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:31:20.044 [2024-11-18 07:05:12.964732] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:31:20.044 [2024-11-18 07:05:12.964740] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:31:20.044 [2024-11-18 07:05:12.964753] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:31:20.044 [2024-11-18 07:05:12.964762] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:20.044 [2024-11-18 07:05:12.964770] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:31:20.044 [2024-11-18 07:05:12.964778] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.289 ms 00:31:20.044 [2024-11-18 07:05:12.964786] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:20.044 [2024-11-18 07:05:12.964874] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:20.044 [2024-11-18 07:05:12.964884] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:31:20.044 [2024-11-18 07:05:12.964896] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:31:20.044 [2024-11-18 07:05:12.964903] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:20.044 [2024-11-18 07:05:12.965018] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:31:20.044 [2024-11-18 07:05:12.965038] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:31:20.044 [2024-11-18 07:05:12.965047] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:31:20.044 [2024-11-18 07:05:12.965055] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:20.044 [2024-11-18 07:05:12.965063] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:31:20.044 [2024-11-18 07:05:12.965069] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:31:20.044 [2024-11-18 07:05:12.965080] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:31:20.044 [2024-11-18 07:05:12.965095] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:31:20.044 [2024-11-18 07:05:12.965102] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:31:20.044 [2024-11-18 07:05:12.965109] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:31:20.044 [2024-11-18 07:05:12.965116] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:31:20.044 [2024-11-18 07:05:12.965124] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:31:20.044 [2024-11-18 07:05:12.965130] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:31:20.044 [2024-11-18 07:05:12.965137] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:31:20.044 [2024-11-18 07:05:12.965143] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:31:20.044 [2024-11-18 07:05:12.965150] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:20.044 [2024-11-18 07:05:12.965157] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:31:20.044 [2024-11-18 07:05:12.965167] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:31:20.044 [2024-11-18 07:05:12.965174] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:20.044 [2024-11-18 07:05:12.965181] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:31:20.044 [2024-11-18 07:05:12.965188] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:31:20.044 [2024-11-18 07:05:12.965195] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:31:20.044 [2024-11-18 07:05:12.965205] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:31:20.044 [2024-11-18 07:05:12.965212] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:31:20.044 [2024-11-18 07:05:12.965219] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:31:20.044 [2024-11-18 07:05:12.965225] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:31:20.044 [2024-11-18 07:05:12.965232] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:31:20.044 [2024-11-18 07:05:12.965239] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:31:20.044 [2024-11-18 07:05:12.965246] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:31:20.044 [2024-11-18 07:05:12.965252] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:31:20.044 [2024-11-18 07:05:12.965259] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:31:20.044 [2024-11-18 07:05:12.965266] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:31:20.044 [2024-11-18 07:05:12.965272] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:31:20.044 [2024-11-18 07:05:12.965278] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:31:20.044 [2024-11-18 07:05:12.965285] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:31:20.044 [2024-11-18 07:05:12.965292] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:31:20.044 [2024-11-18 07:05:12.965299] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:31:20.044 [2024-11-18 07:05:12.965305] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:31:20.044 [2024-11-18 07:05:12.965315] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:31:20.044 [2024-11-18 07:05:12.965323] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:20.044 [2024-11-18 07:05:12.965329] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:31:20.044 [2024-11-18 07:05:12.965336] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:31:20.044 [2024-11-18 07:05:12.965343] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:20.044 [2024-11-18 07:05:12.965349] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:31:20.044 [2024-11-18 07:05:12.965358] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:31:20.044 [2024-11-18 07:05:12.965366] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:31:20.044 [2024-11-18 07:05:12.965373] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:20.044 [2024-11-18 07:05:12.965383] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:31:20.044 [2024-11-18 07:05:12.965390] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:31:20.044 [2024-11-18 07:05:12.965398] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:31:20.044 
[2024-11-18 07:05:12.965405] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:31:20.044 [2024-11-18 07:05:12.965412] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:31:20.044 [2024-11-18 07:05:12.965418] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:31:20.044 [2024-11-18 07:05:12.965426] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:31:20.045 [2024-11-18 07:05:12.965439] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:31:20.045 [2024-11-18 07:05:12.965452] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:31:20.045 [2024-11-18 07:05:12.965460] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:31:20.045 [2024-11-18 07:05:12.965469] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:31:20.045 [2024-11-18 07:05:12.965477] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:31:20.045 [2024-11-18 07:05:12.965484] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:31:20.045 [2024-11-18 07:05:12.965492] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:31:20.045 [2024-11-18 07:05:12.965499] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:31:20.045 [2024-11-18 07:05:12.965507] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:31:20.045 [2024-11-18 07:05:12.965515] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:31:20.045 [2024-11-18 07:05:12.965522] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:31:20.045 [2024-11-18 07:05:12.965529] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:31:20.045 [2024-11-18 07:05:12.965536] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:31:20.045 [2024-11-18 07:05:12.965546] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:31:20.045 [2024-11-18 07:05:12.965553] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:31:20.045 [2024-11-18 07:05:12.965560] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:31:20.045 [2024-11-18 07:05:12.965574] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:31:20.045 [2024-11-18 07:05:12.965582] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:31:20.045 [2024-11-18 07:05:12.965590] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:31:20.045 [2024-11-18 07:05:12.965597] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:31:20.045 [2024-11-18 07:05:12.965604] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:31:20.045 [2024-11-18 07:05:12.965611] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:20.045 [2024-11-18 07:05:12.965618] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:31:20.045 [2024-11-18 07:05:12.965626] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.676 ms 00:31:20.045 [2024-11-18 07:05:12.965634] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:20.045 [2024-11-18 07:05:12.975387] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:20.045 [2024-11-18 07:05:12.975428] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:31:20.045 [2024-11-18 07:05:12.975439] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.712 ms 00:31:20.045 [2024-11-18 07:05:12.975448] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:20.045 [2024-11-18 07:05:12.975534] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:20.045 [2024-11-18 07:05:12.975542] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:31:20.045 [2024-11-18 07:05:12.975550] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.065 ms 00:31:20.045 [2024-11-18 07:05:12.975558] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:20.045 [2024-11-18 07:05:12.996407] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:20.045 [2024-11-18 07:05:12.996466] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:31:20.045 [2024-11-18 07:05:12.996479] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.794 ms 00:31:20.045 [2024-11-18 07:05:12.996493] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:20.045 [2024-11-18 07:05:12.996540] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:20.045 [2024-11-18 07:05:12.996550] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:31:20.045 [2024-11-18 07:05:12.996559] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:31:20.045 [2024-11-18 07:05:12.996566] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:20.045 [2024-11-18 07:05:12.996669] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:20.045 [2024-11-18 07:05:12.996682] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:31:20.045 [2024-11-18 07:05:12.996694] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.048 ms 00:31:20.045 [2024-11-18 07:05:12.996702] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:20.045 [2024-11-18 07:05:12.996828] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:20.045 [2024-11-18 07:05:12.996837] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:31:20.045 [2024-11-18 07:05:12.996846] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.111 ms 00:31:20.045 [2024-11-18 07:05:12.996854] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:20.045 [2024-11-18 07:05:13.004733] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:20.045 [2024-11-18 07:05:13.004780] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:31:20.045 [2024-11-18 07:05:13.004799] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.859 ms 00:31:20.045 [2024-11-18 07:05:13.004811] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:20.045 [2024-11-18 07:05:13.004944] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 00:31:20.045 [2024-11-18 07:05:13.004959] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:31:20.045 [2024-11-18 07:05:13.004970] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:20.045 [2024-11-18 07:05:13.004997] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:31:20.045 [2024-11-18 07:05:13.005007] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:31:20.045 [2024-11-18 07:05:13.005016] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:20.045 [2024-11-18 07:05:13.018096] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:20.045 [2024-11-18 07:05:13.018136] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:31:20.045 [2024-11-18 07:05:13.018147] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.058 ms 00:31:20.045 [2024-11-18 07:05:13.018155] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:20.045 [2024-11-18 07:05:13.018284] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:20.045 [2024-11-18 07:05:13.018294] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:31:20.045 [2024-11-18 07:05:13.018308] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.101 ms 00:31:20.045 [2024-11-18 07:05:13.018316] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:20.045 [2024-11-18 07:05:13.018369] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:20.045 [2024-11-18 07:05:13.018385] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:31:20.045 [2024-11-18 07:05:13.018396] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.001 ms 00:31:20.045 [2024-11-18 07:05:13.018403] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:20.045 [2024-11-18 07:05:13.018716] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:20.045 [2024-11-18 07:05:13.018727] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:31:20.045 [2024-11-18 07:05:13.018735] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.274 ms 00:31:20.045 [2024-11-18 07:05:13.018742] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:20.045 [2024-11-18 07:05:13.018758] mngt/ftl_mngt_p2l.c: 169:ftl_mngt_p2l_restore_ckpt: *NOTICE*: [FTL][ftl0] SHM: skipping p2l ckpt restore 00:31:20.045 [2024-11-18 07:05:13.018767] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:20.045 [2024-11-18 07:05:13.018781] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Restore P2L checkpoints 00:31:20.045 [2024-11-18 07:05:13.018801] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:31:20.045 [2024-11-18 07:05:13.018809] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:20.045 [2024-11-18 07:05:13.028357] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:31:20.045 [2024-11-18 07:05:13.028526] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:20.045 [2024-11-18 07:05:13.028538] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:31:20.045 [2024-11-18 07:05:13.028549] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.700 ms 00:31:20.045 [2024-11-18 07:05:13.028557] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:20.045 [2024-11-18 07:05:13.031109] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:20.045 [2024-11-18 07:05:13.031152] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:31:20.045 [2024-11-18 07:05:13.031164] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.526 ms 00:31:20.045 [2024-11-18 07:05:13.031173] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:20.045 [2024-11-18 07:05:13.031253] mngt/ftl_mngt_band.c: 414:ftl_mngt_finalize_init_bands: *NOTICE*: [FTL][ftl0] SHM: band open P2L map df_id 0x2400000 00:31:20.045 [2024-11-18 07:05:13.031849] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:20.045 [2024-11-18 07:05:13.031867] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:31:20.045 [2024-11-18 07:05:13.031878] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.614 ms 00:31:20.045 [2024-11-18 07:05:13.031888] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:20.045 [2024-11-18 07:05:13.031915] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:20.045 [2024-11-18 07:05:13.031923] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:31:20.045 [2024-11-18 07:05:13.031931] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:31:20.045 [2024-11-18 07:05:13.031938] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:20.045 [2024-11-18 07:05:13.031988] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:31:20.045 [2024-11-18 07:05:13.031999] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:20.045 [2024-11-18 07:05:13.032007] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:31:20.045 [2024-11-18 07:05:13.032015] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.026 ms 00:31:20.046 [2024-11-18 07:05:13.032023] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:20.046 [2024-11-18 07:05:13.038162] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:20.046 [2024-11-18 07:05:13.038348] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:31:20.046 [2024-11-18 07:05:13.038381] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.119 ms 00:31:20.046 [2024-11-18 07:05:13.038389] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:20.046 [2024-11-18 07:05:13.038471] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:20.046 [2024-11-18 07:05:13.038481] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:31:20.046 [2024-11-18 07:05:13.038490] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.042 ms 00:31:20.046 [2024-11-18 07:05:13.038498] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:20.046 [2024-11-18 07:05:13.039841] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 79.027 ms, result 0 00:31:21.434  [2024-11-18T07:05:15.463Z] Copying: 19/1024 [MB] (19 MBps) [2024-11-18T07:06:28.568Z] Copying: 1024/1024 [MB] (average 13 MBps)[2024-11-18 07:06:28.382451] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:35.481 [2024-11-18 07:06:28.382552] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:32:35.481 [2024-11-18 07:06:28.382577] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:32:35.481 [2024-11-18 07:06:28.382589] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:35.481 [2024-11-18 07:06:28.382619] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:32:35.481 [2024-11-18 07:06:28.383506] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:35.481 [2024-11-18 07:06:28.383542] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:32:35.481 [2024-11-18 07:06:28.383557] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.868 ms 00:32:35.481 [2024-11-18 07:06:28.383569] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:35.481 [2024-11-18 07:06:28.383889] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:35.481 [2024-11-18 07:06:28.383901] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:32:35.481 [2024-11-18 07:06:28.383913] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.270 ms 00:32:35.481 [2024-11-18 07:06:28.383924] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:35.481 [2024-11-18 07:06:28.383962] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:35.481 [2024-11-18 07:06:28.384001] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Fast persist NV cache metadata 00:32:35.481 [2024-11-18 07:06:28.384017] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:32:35.481 [2024-11-18 07:06:28.384028] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:35.481 [2024-11-18 07:06:28.384102] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*:
[FTL][ftl0] Action 00:32:35.481 [2024-11-18 07:06:28.384117] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL SHM clean state 00:32:35.481 [2024-11-18 07:06:28.384129] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:32:35.481 [2024-11-18 07:06:28.384139] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:35.481 [2024-11-18 07:06:28.384157] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:32:35.481 [2024-11-18 07:06:28.384181] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 131072 / 261120 wr_cnt: 1 state: open 00:32:35.481 [2024-11-18 07:06:28.384199] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:32:35.481 [2024-11-18 07:06:28.384211] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:32:35.481 [2024-11-18 07:06:28.384221] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:32:35.481 [2024-11-18 07:06:28.384232] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:32:35.481 [2024-11-18 07:06:28.384242] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:32:35.481 [2024-11-18 07:06:28.384253] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:32:35.481 [2024-11-18 07:06:28.384263] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:32:35.481 [2024-11-18 07:06:28.384273] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:32:35.481 [2024-11-18 07:06:28.384283] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:32:35.481 [2024-11-18 07:06:28.384295] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:32:35.481 [2024-11-18 07:06:28.384305] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:32:35.481 [2024-11-18 07:06:28.384316] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:32:35.481 [2024-11-18 07:06:28.384336] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:32:35.481 [2024-11-18 07:06:28.384347] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:32:35.481 [2024-11-18 07:06:28.384357] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:32:35.481 [2024-11-18 07:06:28.384367] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:32:35.481 [2024-11-18 07:06:28.384376] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:32:35.481 [2024-11-18 07:06:28.384386] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:32:35.481 [2024-11-18 07:06:28.384396] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:32:35.481 [2024-11-18 07:06:28.384406] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:32:35.481 [2024-11-18 07:06:28.384418] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:32:35.481 [2024-11-18 07:06:28.384428] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:32:35.481 [2024-11-18 07:06:28.384438] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:32:35.481 [2024-11-18 07:06:28.384448] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:32:35.481 [2024-11-18 07:06:28.384458] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:32:35.481 [2024-11-18 07:06:28.384468] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:32:35.481 [2024-11-18 07:06:28.384478] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:32:35.481 [2024-11-18 07:06:28.384496] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:32:35.481 [2024-11-18 07:06:28.384506] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:32:35.481 [2024-11-18 07:06:28.384517] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:32:35.481 [2024-11-18 07:06:28.384527] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:32:35.481 [2024-11-18 07:06:28.384537] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:32:35.481 [2024-11-18 07:06:28.384546] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:32:35.481 [2024-11-18 07:06:28.384556] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:32:35.481 [2024-11-18 07:06:28.384566] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:32:35.481 [2024-11-18 07:06:28.384576] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:32:35.482 [2024-11-18 07:06:28.384586] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:32:35.482 [2024-11-18 07:06:28.384596] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:32:35.482 [2024-11-18 07:06:28.384605] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:32:35.482 [2024-11-18 07:06:28.384615] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:32:35.482 [2024-11-18 07:06:28.384625] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:32:35.482 [2024-11-18 07:06:28.384643] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:32:35.482 [2024-11-18 07:06:28.384653] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:32:35.482 [2024-11-18 07:06:28.384668] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:32:35.482 [2024-11-18 07:06:28.384679] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:32:35.482 [2024-11-18 
07:06:28.384695] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:32:35.482 [2024-11-18 07:06:28.384706] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:32:35.482 [2024-11-18 07:06:28.384716] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:32:35.482 [2024-11-18 07:06:28.384733] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:32:35.482 [2024-11-18 07:06:28.384742] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:32:35.482 [2024-11-18 07:06:28.384752] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:32:35.482 [2024-11-18 07:06:28.384763] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:32:35.482 [2024-11-18 07:06:28.384779] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:32:35.482 [2024-11-18 07:06:28.384789] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:32:35.482 [2024-11-18 07:06:28.384800] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:32:35.482 [2024-11-18 07:06:28.384810] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:32:35.482 [2024-11-18 07:06:28.384820] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:32:35.482 [2024-11-18 07:06:28.384830] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:32:35.482 [2024-11-18 07:06:28.384839] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:32:35.482 [2024-11-18 07:06:28.384848] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:32:35.482 [2024-11-18 07:06:28.384859] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:32:35.482 [2024-11-18 07:06:28.384870] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:32:35.482 [2024-11-18 07:06:28.384881] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:32:35.482 [2024-11-18 07:06:28.384891] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:32:35.482 [2024-11-18 07:06:28.384901] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:32:35.482 [2024-11-18 07:06:28.384910] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:32:35.482 [2024-11-18 07:06:28.384921] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:32:35.482 [2024-11-18 07:06:28.384931] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:32:35.482 [2024-11-18 07:06:28.384940] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:32:35.482 [2024-11-18 07:06:28.384949] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 
00:32:35.482 [2024-11-18 07:06:28.384960] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:32:35.482 [2024-11-18 07:06:28.384970] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:32:35.482 [2024-11-18 07:06:28.385006] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:32:35.482 [2024-11-18 07:06:28.385017] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:32:35.482 [2024-11-18 07:06:28.385027] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:32:35.482 [2024-11-18 07:06:28.385037] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:32:35.482 [2024-11-18 07:06:28.385047] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:32:35.482 [2024-11-18 07:06:28.385058] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:32:35.482 [2024-11-18 07:06:28.385069] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:32:35.482 [2024-11-18 07:06:28.385079] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:32:35.482 [2024-11-18 07:06:28.385088] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:32:35.482 [2024-11-18 07:06:28.385098] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:32:35.482 [2024-11-18 07:06:28.385109] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:32:35.482 [2024-11-18 07:06:28.385118] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:32:35.482 [2024-11-18 07:06:28.385131] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:32:35.482 [2024-11-18 07:06:28.385141] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:32:35.482 [2024-11-18 07:06:28.385151] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:32:35.482 [2024-11-18 07:06:28.385161] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:32:35.482 [2024-11-18 07:06:28.385171] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:32:35.482 [2024-11-18 07:06:28.385181] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:32:35.482 [2024-11-18 07:06:28.385190] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:32:35.482 [2024-11-18 07:06:28.385201] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:32:35.482 [2024-11-18 07:06:28.385211] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:32:35.482 [2024-11-18 07:06:28.385220] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:32:35.482 [2024-11-18 07:06:28.385231] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 
wr_cnt: 0 state: free 00:32:35.482 [2024-11-18 07:06:28.385240] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:32:35.482 [2024-11-18 07:06:28.385250] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:32:35.482 [2024-11-18 07:06:28.385260] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:32:35.482 [2024-11-18 07:06:28.385270] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:32:35.482 [2024-11-18 07:06:28.385290] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:32:35.482 [2024-11-18 07:06:28.385300] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: ff51a709-2828-45f3-8c57-df62594139b9 00:32:35.482 [2024-11-18 07:06:28.385316] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 131072 00:32:35.482 [2024-11-18 07:06:28.385326] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 2336 00:32:35.482 [2024-11-18 07:06:28.385335] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 2304 00:32:35.482 [2024-11-18 07:06:28.385345] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0139 00:32:35.482 [2024-11-18 07:06:28.385361] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:32:35.482 [2024-11-18 07:06:28.385371] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:32:35.482 [2024-11-18 07:06:28.385381] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:32:35.482 [2024-11-18 07:06:28.385390] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:32:35.482 [2024-11-18 07:06:28.385398] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:32:35.482 [2024-11-18 07:06:28.385407] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:35.482 [2024-11-18 07:06:28.385417] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:32:35.482 [2024-11-18 07:06:28.385428] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.251 ms 00:32:35.482 [2024-11-18 07:06:28.385437] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:35.482 [2024-11-18 07:06:28.389268] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:35.482 [2024-11-18 07:06:28.389431] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:32:35.482 [2024-11-18 07:06:28.389502] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.810 ms 00:32:35.482 [2024-11-18 07:06:28.389527] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:35.483 [2024-11-18 07:06:28.389672] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:35.483 [2024-11-18 07:06:28.389698] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:32:35.483 [2024-11-18 07:06:28.389720] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.105 ms 00:32:35.483 [2024-11-18 07:06:28.389740] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:35.483 [2024-11-18 07:06:28.397449] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:35.483 [2024-11-18 07:06:28.397630] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:32:35.483 [2024-11-18 07:06:28.397687] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 0.000 ms 00:32:35.483 [2024-11-18 07:06:28.397710] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:35.483 [2024-11-18 07:06:28.397806] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:35.483 [2024-11-18 07:06:28.397830] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:32:35.483 [2024-11-18 07:06:28.397850] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:35.483 [2024-11-18 07:06:28.397870] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:35.483 [2024-11-18 07:06:28.397947] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:35.483 [2024-11-18 07:06:28.398095] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:32:35.483 [2024-11-18 07:06:28.398135] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:35.483 [2024-11-18 07:06:28.398157] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:35.483 [2024-11-18 07:06:28.398191] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:35.483 [2024-11-18 07:06:28.398387] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:32:35.483 [2024-11-18 07:06:28.398432] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:35.483 [2024-11-18 07:06:28.398452] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:35.483 [2024-11-18 07:06:28.412214] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:35.483 [2024-11-18 07:06:28.412400] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:32:35.483 [2024-11-18 07:06:28.412454] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:35.483 [2024-11-18 07:06:28.412476] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:35.483 [2024-11-18 07:06:28.422957] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:35.483 [2024-11-18 07:06:28.423121] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:32:35.483 [2024-11-18 07:06:28.423172] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:35.483 [2024-11-18 07:06:28.423215] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:35.483 [2024-11-18 07:06:28.423275] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:35.483 [2024-11-18 07:06:28.423298] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:32:35.483 [2024-11-18 07:06:28.423308] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:35.483 [2024-11-18 07:06:28.423319] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:35.483 [2024-11-18 07:06:28.423354] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:35.483 [2024-11-18 07:06:28.423363] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:32:35.483 [2024-11-18 07:06:28.423371] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:35.483 [2024-11-18 07:06:28.423379] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:35.483 [2024-11-18 07:06:28.423432] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:35.483 [2024-11-18 07:06:28.423450] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:32:35.483 
[2024-11-18 07:06:28.423462] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:35.483 [2024-11-18 07:06:28.423470] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:35.483 [2024-11-18 07:06:28.423501] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:35.483 [2024-11-18 07:06:28.423510] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:32:35.483 [2024-11-18 07:06:28.423519] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:35.483 [2024-11-18 07:06:28.423527] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:35.483 [2024-11-18 07:06:28.423565] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:35.483 [2024-11-18 07:06:28.423574] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:32:35.483 [2024-11-18 07:06:28.423582] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:35.483 [2024-11-18 07:06:28.423590] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:35.483 [2024-11-18 07:06:28.423635] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:35.483 [2024-11-18 07:06:28.423645] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:32:35.483 [2024-11-18 07:06:28.423653] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:35.483 [2024-11-18 07:06:28.423661] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:35.483 [2024-11-18 07:06:28.423792] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL fast shutdown', duration = 41.310 ms, result 0 00:32:35.744 00:32:35.744 00:32:35.744 07:06:28 ftl.ftl_restore_fast -- ftl/restore.sh@82 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:32:38.290 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:32:38.290 07:06:30 ftl.ftl_restore_fast -- ftl/restore.sh@84 -- # trap - SIGINT SIGTERM EXIT 00:32:38.290 07:06:30 ftl.ftl_restore_fast -- ftl/restore.sh@85 -- # restore_kill 00:32:38.290 07:06:30 ftl.ftl_restore_fast -- ftl/restore.sh@28 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:32:38.290 07:06:30 ftl.ftl_restore_fast -- ftl/restore.sh@29 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:32:38.290 07:06:31 ftl.ftl_restore_fast -- ftl/restore.sh@30 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:32:38.290 07:06:31 ftl.ftl_restore_fast -- ftl/restore.sh@32 -- # killprocess 92650 00:32:38.290 07:06:31 ftl.ftl_restore_fast -- common/autotest_common.sh@954 -- # '[' -z 92650 ']' 00:32:38.290 Process with pid 92650 is not found 00:32:38.290 07:06:31 ftl.ftl_restore_fast -- common/autotest_common.sh@958 -- # kill -0 92650 00:32:38.290 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (92650) - No such process 00:32:38.290 07:06:31 ftl.ftl_restore_fast -- common/autotest_common.sh@981 -- # echo 'Process with pid 92650 is not found' 00:32:38.290 07:06:31 ftl.ftl_restore_fast -- ftl/restore.sh@33 -- # remove_shm 00:32:38.290 Remove shared memory files 00:32:38.290 07:06:31 ftl.ftl_restore_fast -- ftl/common.sh@204 -- # echo Remove shared memory files 00:32:38.290 07:06:31 ftl.ftl_restore_fast -- ftl/common.sh@205 -- # rm -f rm -f 00:32:38.290 07:06:31 ftl.ftl_restore_fast -- ftl/common.sh@206 -- # rm -f rm -f /dev/hugepages/ftl_ff51a709-2828-45f3-8c57-df62594139b9_band_md 
/dev/hugepages/ftl_ff51a709-2828-45f3-8c57-df62594139b9_l2p_l1 /dev/hugepages/ftl_ff51a709-2828-45f3-8c57-df62594139b9_l2p_l2 /dev/hugepages/ftl_ff51a709-2828-45f3-8c57-df62594139b9_l2p_l2_ctx /dev/hugepages/ftl_ff51a709-2828-45f3-8c57-df62594139b9_nvc_md /dev/hugepages/ftl_ff51a709-2828-45f3-8c57-df62594139b9_p2l_pool /dev/hugepages/ftl_ff51a709-2828-45f3-8c57-df62594139b9_sb /dev/hugepages/ftl_ff51a709-2828-45f3-8c57-df62594139b9_sb_shm /dev/hugepages/ftl_ff51a709-2828-45f3-8c57-df62594139b9_trim_bitmap /dev/hugepages/ftl_ff51a709-2828-45f3-8c57-df62594139b9_trim_log /dev/hugepages/ftl_ff51a709-2828-45f3-8c57-df62594139b9_trim_md /dev/hugepages/ftl_ff51a709-2828-45f3-8c57-df62594139b9_vmap 00:32:38.290 07:06:31 ftl.ftl_restore_fast -- ftl/common.sh@207 -- # rm -f rm -f 00:32:38.290 07:06:31 ftl.ftl_restore_fast -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:32:38.290 07:06:31 ftl.ftl_restore_fast -- ftl/common.sh@209 -- # rm -f rm -f 00:32:38.290 00:32:38.290 real 4m29.538s 00:32:38.290 user 4m17.338s 00:32:38.290 sys 0m11.884s 00:32:38.290 07:06:31 ftl.ftl_restore_fast -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:38.290 07:06:31 ftl.ftl_restore_fast -- common/autotest_common.sh@10 -- # set +x 00:32:38.290 ************************************ 00:32:38.290 END TEST ftl_restore_fast 00:32:38.290 ************************************ 00:32:38.290 07:06:31 ftl -- ftl/ftl.sh@1 -- # at_ftl_exit 00:32:38.290 07:06:31 ftl -- ftl/ftl.sh@14 -- # killprocess 83744 00:32:38.290 07:06:31 ftl -- common/autotest_common.sh@954 -- # '[' -z 83744 ']' 00:32:38.290 07:06:31 ftl -- common/autotest_common.sh@958 -- # kill -0 83744 00:32:38.290 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (83744) - No such process 00:32:38.290 Process with pid 83744 is not found 00:32:38.290 07:06:31 ftl -- common/autotest_common.sh@981 -- # echo 'Process with pid 83744 is not found' 00:32:38.290 07:06:31 ftl -- ftl/ftl.sh@17 -- # [[ -n 0000:00:11.0 ]] 00:32:38.290 07:06:31 ftl -- ftl/ftl.sh@19 -- # spdk_tgt_pid=95407 00:32:38.290 07:06:31 ftl -- ftl/ftl.sh@20 -- # waitforlisten 95407 00:32:38.290 07:06:31 ftl -- common/autotest_common.sh@835 -- # '[' -z 95407 ']' 00:32:38.290 07:06:31 ftl -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:38.290 07:06:31 ftl -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:38.290 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:38.290 07:06:31 ftl -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:38.290 07:06:31 ftl -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:38.290 07:06:31 ftl -- common/autotest_common.sh@10 -- # set +x 00:32:38.290 07:06:31 ftl -- ftl/ftl.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:32:38.290 [2024-11-18 07:06:31.161378] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 
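The killprocess checks traced above for pids 92650 and 83744 follow one pattern: validate the argument, probe the process with kill -0 (signal 0 delivers nothing and only tests existence and permission), then either report it missing or kill it and wait for it to exit. A simplified reconstruction of that helper from the traced steps (the uname/reactor_0/sudo guard is abbreviated; treat this as a sketch, not the exact autotest_common.sh source):

  killprocess() {
      local pid=$1
      [ -z "$pid" ] && return 1                 # no pid supplied
      if ! kill -0 "$pid" 2>/dev/null; then     # probe only; sends no signal
          echo "Process with pid $pid is not found"
          return 0
      fi
      echo "killing process with pid $pid"
      kill "$pid"
      wait "$pid"                               # reap it before continuing
  }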
00:32:38.290 [2024-11-18 07:06:31.161528] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid95407 ] 00:32:38.290 [2024-11-18 07:06:31.320660] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:38.290 [2024-11-18 07:06:31.349445] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:39.235 07:06:32 ftl -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:39.235 07:06:32 ftl -- common/autotest_common.sh@868 -- # return 0 00:32:39.235 07:06:32 ftl -- ftl/ftl.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:32:39.235 nvme0n1 00:32:39.235 07:06:32 ftl -- ftl/ftl.sh@22 -- # clear_lvols 00:32:39.235 07:06:32 ftl -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:32:39.235 07:06:32 ftl -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:32:39.496 07:06:32 ftl -- ftl/common.sh@28 -- # stores=b9d2a40d-7b0d-4880-9ff1-93f106986b36 00:32:39.496 07:06:32 ftl -- ftl/common.sh@29 -- # for lvs in $stores 00:32:39.496 07:06:32 ftl -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u b9d2a40d-7b0d-4880-9ff1-93f106986b36 00:32:39.757 07:06:32 ftl -- ftl/ftl.sh@23 -- # killprocess 95407 00:32:39.757 07:06:32 ftl -- common/autotest_common.sh@954 -- # '[' -z 95407 ']' 00:32:39.757 07:06:32 ftl -- common/autotest_common.sh@958 -- # kill -0 95407 00:32:39.757 07:06:32 ftl -- common/autotest_common.sh@959 -- # uname 00:32:39.757 07:06:32 ftl -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:39.757 07:06:32 ftl -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 95407 00:32:39.757 killing process with pid 95407 00:32:39.757 07:06:32 ftl -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:32:39.757 07:06:32 ftl -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:32:39.757 07:06:32 ftl -- common/autotest_common.sh@972 -- # echo 'killing process with pid 95407' 00:32:39.757 07:06:32 ftl -- common/autotest_common.sh@973 -- # kill 95407 00:32:39.757 07:06:32 ftl -- common/autotest_common.sh@978 -- # wait 95407 00:32:40.330 07:06:33 ftl -- ftl/ftl.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:32:40.330 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:32:40.330 Waiting for block devices as requested 00:32:40.591 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:32:40.591 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:32:40.591 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:32:40.591 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:32:45.883 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:32:45.883 Remove shared memory files 00:32:45.883 07:06:38 ftl -- ftl/ftl.sh@28 -- # remove_shm 00:32:45.883 07:06:38 ftl -- ftl/common.sh@204 -- # echo Remove shared memory files 00:32:45.883 07:06:38 ftl -- ftl/common.sh@205 -- # rm -f rm -f 00:32:45.883 07:06:38 ftl -- ftl/common.sh@206 -- # rm -f rm -f 00:32:45.883 07:06:38 ftl -- ftl/common.sh@207 -- # rm -f rm -f 00:32:45.883 07:06:38 ftl -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:32:45.883 07:06:38 ftl -- ftl/common.sh@209 -- # rm -f rm -f 00:32:45.883 00:32:45.883 real 
17m33.897s 00:32:45.883 user 19m29.467s 00:32:45.883 sys 1m22.080s 00:32:45.883 07:06:38 ftl -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:45.883 07:06:38 ftl -- common/autotest_common.sh@10 -- # set +x 00:32:45.883 ************************************ 00:32:45.883 END TEST ftl 00:32:45.883 ************************************ 00:32:45.883 07:06:38 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:32:45.883 07:06:38 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']' 00:32:45.883 07:06:38 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:32:45.883 07:06:38 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']' 00:32:45.883 07:06:38 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:32:45.883 07:06:38 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:32:45.883 07:06:38 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]] 00:32:45.883 07:06:38 -- spdk/autotest.sh@378 -- # [[ '' -eq 1 ]] 00:32:45.883 07:06:38 -- spdk/autotest.sh@385 -- # trap - SIGINT SIGTERM EXIT 00:32:45.883 07:06:38 -- spdk/autotest.sh@387 -- # timing_enter post_cleanup 00:32:45.883 07:06:38 -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:45.883 07:06:38 -- common/autotest_common.sh@10 -- # set +x 00:32:45.883 07:06:38 -- spdk/autotest.sh@388 -- # autotest_cleanup 00:32:45.883 07:06:38 -- common/autotest_common.sh@1396 -- # local autotest_es=0 00:32:45.883 07:06:38 -- common/autotest_common.sh@1397 -- # xtrace_disable 00:32:45.883 07:06:38 -- common/autotest_common.sh@10 -- # set +x 00:32:47.330 INFO: APP EXITING 00:32:47.330 INFO: killing all VMs 00:32:47.330 INFO: killing vhost app 00:32:47.330 INFO: EXIT DONE 00:32:47.627 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:32:47.888 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:32:47.888 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:32:47.888 0000:00:12.0 (1b36 0010): Already using the nvme driver 00:32:47.888 0000:00:13.0 (1b36 0010): Already using the nvme driver 00:32:48.461 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:32:48.723 Cleaning 00:32:48.723 Removing: /var/run/dpdk/spdk0/config 00:32:48.723 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:32:48.723 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:32:48.723 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:32:48.723 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:32:48.723 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:32:48.723 Removing: /var/run/dpdk/spdk0/hugepage_info 00:32:48.723 Removing: /var/run/dpdk/spdk0 00:32:48.723 Removing: /var/run/dpdk/spdk_pid69224 00:32:48.723 Removing: /var/run/dpdk/spdk_pid69377 00:32:48.723 Removing: /var/run/dpdk/spdk_pid69579 00:32:48.723 Removing: /var/run/dpdk/spdk_pid69666 00:32:48.723 Removing: /var/run/dpdk/spdk_pid69689 00:32:48.723 Removing: /var/run/dpdk/spdk_pid69801 00:32:48.723 Removing: /var/run/dpdk/spdk_pid69813 00:32:48.723 Removing: /var/run/dpdk/spdk_pid69996 00:32:48.723 Removing: /var/run/dpdk/spdk_pid70069 00:32:48.723 Removing: /var/run/dpdk/spdk_pid70149 00:32:48.723 Removing: /var/run/dpdk/spdk_pid70249 00:32:48.723 Removing: /var/run/dpdk/spdk_pid70329 00:32:48.723 Removing: /var/run/dpdk/spdk_pid70363 00:32:48.723 Removing: /var/run/dpdk/spdk_pid70400 00:32:48.723 Removing: /var/run/dpdk/spdk_pid70470 00:32:48.723 Removing: /var/run/dpdk/spdk_pid70565 00:32:48.723 Removing: /var/run/dpdk/spdk_pid70985 00:32:48.723 Removing: /var/run/dpdk/spdk_pid71037 00:32:48.723 
Removing: /var/run/dpdk/spdk_pid71079 00:32:48.723 Removing: /var/run/dpdk/spdk_pid71095 00:32:48.723 Removing: /var/run/dpdk/spdk_pid71153 00:32:48.723 Removing: /var/run/dpdk/spdk_pid71169 00:32:48.723 Removing: /var/run/dpdk/spdk_pid71227 00:32:48.723 Removing: /var/run/dpdk/spdk_pid71232 00:32:48.723 Removing: /var/run/dpdk/spdk_pid71285 00:32:48.723 Removing: /var/run/dpdk/spdk_pid71297 00:32:48.723 Removing: /var/run/dpdk/spdk_pid71345 00:32:48.723 Removing: /var/run/dpdk/spdk_pid71352 00:32:48.723 Removing: /var/run/dpdk/spdk_pid71489 00:32:48.723 Removing: /var/run/dpdk/spdk_pid71521 00:32:48.723 Removing: /var/run/dpdk/spdk_pid71599 00:32:48.723 Removing: /var/run/dpdk/spdk_pid71766 00:32:48.723 Removing: /var/run/dpdk/spdk_pid71833 00:32:48.723 Removing: /var/run/dpdk/spdk_pid71864 00:32:48.723 Removing: /var/run/dpdk/spdk_pid72286 00:32:48.723 Removing: /var/run/dpdk/spdk_pid72379 00:32:48.723 Removing: /var/run/dpdk/spdk_pid72482 00:32:48.723 Removing: /var/run/dpdk/spdk_pid72519 00:32:48.723 Removing: /var/run/dpdk/spdk_pid72544 00:32:48.723 Removing: /var/run/dpdk/spdk_pid72623 00:32:48.723 Removing: /var/run/dpdk/spdk_pid73238 00:32:48.723 Removing: /var/run/dpdk/spdk_pid73269 00:32:48.723 Removing: /var/run/dpdk/spdk_pid73733 00:32:48.723 Removing: /var/run/dpdk/spdk_pid73820 00:32:48.723 Removing: /var/run/dpdk/spdk_pid73923 00:32:48.723 Removing: /var/run/dpdk/spdk_pid73965 00:32:48.723 Removing: /var/run/dpdk/spdk_pid73985 00:32:48.723 Removing: /var/run/dpdk/spdk_pid74005 00:32:48.985 Removing: /var/run/dpdk/spdk_pid75847 00:32:48.985 Removing: /var/run/dpdk/spdk_pid75968 00:32:48.985 Removing: /var/run/dpdk/spdk_pid75972 00:32:48.985 Removing: /var/run/dpdk/spdk_pid75984 00:32:48.985 Removing: /var/run/dpdk/spdk_pid76028 00:32:48.985 Removing: /var/run/dpdk/spdk_pid76032 00:32:48.985 Removing: /var/run/dpdk/spdk_pid76044 00:32:48.985 Removing: /var/run/dpdk/spdk_pid76083 00:32:48.985 Removing: /var/run/dpdk/spdk_pid76087 00:32:48.985 Removing: /var/run/dpdk/spdk_pid76099 00:32:48.985 Removing: /var/run/dpdk/spdk_pid76145 00:32:48.985 Removing: /var/run/dpdk/spdk_pid76149 00:32:48.985 Removing: /var/run/dpdk/spdk_pid76161 00:32:48.985 Removing: /var/run/dpdk/spdk_pid77537 00:32:48.985 Removing: /var/run/dpdk/spdk_pid77623 00:32:48.985 Removing: /var/run/dpdk/spdk_pid79016 00:32:48.985 Removing: /var/run/dpdk/spdk_pid80367 00:32:48.985 Removing: /var/run/dpdk/spdk_pid80436 00:32:48.985 Removing: /var/run/dpdk/spdk_pid80485 00:32:48.985 Removing: /var/run/dpdk/spdk_pid80541 00:32:48.985 Removing: /var/run/dpdk/spdk_pid80618 00:32:48.985 Removing: /var/run/dpdk/spdk_pid80681 00:32:48.985 Removing: /var/run/dpdk/spdk_pid80818 00:32:48.985 Removing: /var/run/dpdk/spdk_pid81166 00:32:48.985 Removing: /var/run/dpdk/spdk_pid81190 00:32:48.985 Removing: /var/run/dpdk/spdk_pid81630 00:32:48.985 Removing: /var/run/dpdk/spdk_pid81806 00:32:48.985 Removing: /var/run/dpdk/spdk_pid81898 00:32:48.985 Removing: /var/run/dpdk/spdk_pid82003 00:32:48.985 Removing: /var/run/dpdk/spdk_pid82034 00:32:48.985 Removing: /var/run/dpdk/spdk_pid82065 00:32:48.985 Removing: /var/run/dpdk/spdk_pid82346 00:32:48.985 Removing: /var/run/dpdk/spdk_pid82390 00:32:48.985 Removing: /var/run/dpdk/spdk_pid82440 00:32:48.985 Removing: /var/run/dpdk/spdk_pid82807 00:32:48.985 Removing: /var/run/dpdk/spdk_pid82945 00:32:48.985 Removing: /var/run/dpdk/spdk_pid83744 00:32:48.985 Removing: /var/run/dpdk/spdk_pid83854 00:32:48.985 Removing: /var/run/dpdk/spdk_pid84019 00:32:48.985 Removing: 
/var/run/dpdk/spdk_pid84096 00:32:48.985 Removing: /var/run/dpdk/spdk_pid84382 00:32:48.985 Removing: /var/run/dpdk/spdk_pid84635 00:32:48.985 Removing: /var/run/dpdk/spdk_pid84988 00:32:48.985 Removing: /var/run/dpdk/spdk_pid85153 00:32:48.985 Removing: /var/run/dpdk/spdk_pid85274 00:32:48.985 Removing: /var/run/dpdk/spdk_pid85310 00:32:48.985 Removing: /var/run/dpdk/spdk_pid85475 00:32:48.985 Removing: /var/run/dpdk/spdk_pid85495 00:32:48.985 Removing: /var/run/dpdk/spdk_pid85531 00:32:48.985 Removing: /var/run/dpdk/spdk_pid85786 00:32:48.985 Removing: /var/run/dpdk/spdk_pid85999 00:32:48.985 Removing: /var/run/dpdk/spdk_pid86665 00:32:48.985 Removing: /var/run/dpdk/spdk_pid87454 00:32:48.985 Removing: /var/run/dpdk/spdk_pid88075 00:32:48.985 Removing: /var/run/dpdk/spdk_pid88879 00:32:48.985 Removing: /var/run/dpdk/spdk_pid89024 00:32:48.985 Removing: /var/run/dpdk/spdk_pid89104 00:32:48.985 Removing: /var/run/dpdk/spdk_pid89625 00:32:48.985 Removing: /var/run/dpdk/spdk_pid89681 00:32:48.985 Removing: /var/run/dpdk/spdk_pid90466 00:32:48.985 Removing: /var/run/dpdk/spdk_pid90920 00:32:48.985 Removing: /var/run/dpdk/spdk_pid91716 00:32:48.985 Removing: /var/run/dpdk/spdk_pid91839 00:32:48.985 Removing: /var/run/dpdk/spdk_pid91872 00:32:48.985 Removing: /var/run/dpdk/spdk_pid91925 00:32:48.985 Removing: /var/run/dpdk/spdk_pid91975 00:32:48.985 Removing: /var/run/dpdk/spdk_pid92023 00:32:48.985 Removing: /var/run/dpdk/spdk_pid92189 00:32:48.985 Removing: /var/run/dpdk/spdk_pid92261 00:32:48.985 Removing: /var/run/dpdk/spdk_pid92328 00:32:48.985 Removing: /var/run/dpdk/spdk_pid92422 00:32:48.985 Removing: /var/run/dpdk/spdk_pid92457 00:32:48.985 Removing: /var/run/dpdk/spdk_pid92513 00:32:48.985 Removing: /var/run/dpdk/spdk_pid92650 00:32:48.985 Removing: /var/run/dpdk/spdk_pid92860 00:32:48.985 Removing: /var/run/dpdk/spdk_pid93328 00:32:48.985 Removing: /var/run/dpdk/spdk_pid94025 00:32:48.985 Removing: /var/run/dpdk/spdk_pid94586 00:32:48.985 Removing: /var/run/dpdk/spdk_pid95407 00:32:48.985 Clean 00:32:48.985 07:06:42 -- common/autotest_common.sh@1453 -- # return 0 00:32:48.985 07:06:42 -- spdk/autotest.sh@389 -- # timing_exit post_cleanup 00:32:48.985 07:06:42 -- common/autotest_common.sh@732 -- # xtrace_disable 00:32:48.985 07:06:42 -- common/autotest_common.sh@10 -- # set +x 00:32:49.246 07:06:42 -- spdk/autotest.sh@391 -- # timing_exit autotest 00:32:49.246 07:06:42 -- common/autotest_common.sh@732 -- # xtrace_disable 00:32:49.246 07:06:42 -- common/autotest_common.sh@10 -- # set +x 00:32:49.246 07:06:42 -- spdk/autotest.sh@392 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:32:49.246 07:06:42 -- spdk/autotest.sh@394 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:32:49.246 07:06:42 -- spdk/autotest.sh@394 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log 00:32:49.246 07:06:42 -- spdk/autotest.sh@396 -- # [[ y == y ]] 00:32:49.246 07:06:42 -- spdk/autotest.sh@398 -- # hostname 00:32:49.246 07:06:42 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /home/vagrant/spdk_repo/spdk -t fedora39-cloud-1721788873-2326 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info 00:32:49.246 geninfo: WARNING: invalid characters removed from testname! 
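The geninfo capture above writes cov_test.info; the records that follow merge it with the pre-test baseline and filter out external sources. Stripped of the repeated --rc lcov_branch_coverage=1 ... option block, the sequence reduces to the sketch below ($SPDK_DIR and $OUT are shorthand assumptions; the traced commands show the full paths and flags):

  # capture post-test counters from the instrumented build tree
  lcov -q -c --no-external -d "$SPDK_DIR" -t "$(hostname)" -o "$OUT/cov_test.info"
  # merge baseline and test captures into one report
  lcov -q -a "$OUT/cov_base.info" -a "$OUT/cov_test.info" -o "$OUT/cov_total.info"
  # drop bundled DPDK and system sources from the totals
  lcov -q -r "$OUT/cov_total.info" '*/dpdk/*' -o "$OUT/cov_total.info"
  lcov -q -r "$OUT/cov_total.info" --ignore-errors unused,unused '/usr/*' -o "$OUT/cov_total.info"

Further -r passes below remove the example and app trees ('*/examples/vmd/*', '*/app/spdk_lspci/*', '*/app/spdk_top/*') the same way.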
00:33:15.837 07:07:07 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:33:17.750 07:07:10 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:33:21.054 07:07:13 -- spdk/autotest.sh@404 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:33:23.603 07:07:16 -- spdk/autotest.sh@405 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:33:26.153 07:07:18 -- spdk/autotest.sh@406 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:33:28.701 07:07:21 -- spdk/autotest.sh@407 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:33:31.250 07:07:24 -- spdk/autotest.sh@408 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:33:31.250 07:07:24 -- spdk/autorun.sh@1 -- $ timing_finish 00:33:31.250 07:07:24 -- common/autotest_common.sh@738 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/timing.txt ]] 00:33:31.250 07:07:24 -- common/autotest_common.sh@740 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:33:31.250 07:07:24 -- common/autotest_common.sh@741 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]] 00:33:31.250 07:07:24 -- common/autotest_common.sh@744 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:33:31.250 + [[ -n 5767 ]] 00:33:31.250 + sudo kill 5767 00:33:31.261 [Pipeline] } 00:33:31.278 [Pipeline] // timeout 00:33:31.283 [Pipeline] } 00:33:31.298 [Pipeline] // stage 00:33:31.303 [Pipeline] } 00:33:31.318 [Pipeline] // catchError 00:33:31.327 [Pipeline] stage 00:33:31.330 [Pipeline] { (Stop VM) 00:33:31.342 [Pipeline] sh 00:33:31.628 + vagrant halt 00:33:34.176 ==> default: Halting domain... 
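From here the pipeline tears the test VM down and collects results: the graceful vagrant halt already underway above, a forced destroy once the domain is stopped, and a move of the output directory into the Jenkins workspace. A minimal sketch of that sequence as shell (the cd target is an assumption; the log never prints the working directory):

  cd "$VM_DIR"          # assumed location of this run's Vagrantfile
  vagrant halt          # graceful shutdown; waits for the domain to stop
  vagrant destroy -f    # remove the domain without an interactive prompt
  mv output /var/jenkins/workspace/nvme-vg-autotest/output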
00:33:40.775 [Pipeline] sh 00:33:41.058 + vagrant destroy -f 00:33:43.594 ==> default: Removing domain... 00:33:44.180 [Pipeline] sh 00:33:44.536 + mv output /var/jenkins/workspace/nvme-vg-autotest/output 00:33:44.547 [Pipeline] } 00:33:44.561 [Pipeline] // stage 00:33:44.566 [Pipeline] } 00:33:44.579 [Pipeline] // dir 00:33:44.584 [Pipeline] } 00:33:44.598 [Pipeline] // wrap 00:33:44.605 [Pipeline] } 00:33:44.617 [Pipeline] // catchError 00:33:44.629 [Pipeline] stage 00:33:44.631 [Pipeline] { (Epilogue) 00:33:44.646 [Pipeline] sh 00:33:44.933 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:33:50.236 [Pipeline] catchError 00:33:50.238 [Pipeline] { 00:33:50.252 [Pipeline] sh 00:33:50.538 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:33:50.538 Artifacts sizes are good 00:33:50.549 [Pipeline] } 00:33:50.564 [Pipeline] // catchError 00:33:50.578 [Pipeline] archiveArtifacts 00:33:50.586 Archiving artifacts 00:33:50.700 [Pipeline] cleanWs 00:33:50.714 [WS-CLEANUP] Deleting project workspace... 00:33:50.714 [WS-CLEANUP] Deferred wipeout is used... 00:33:50.721 [WS-CLEANUP] done 00:33:50.723 [Pipeline] } 00:33:50.739 [Pipeline] // stage 00:33:50.744 [Pipeline] } 00:33:50.758 [Pipeline] // node 00:33:50.763 [Pipeline] End of Pipeline 00:33:50.812 Finished: SUCCESS